Sep 9 05:27:09.838191 kernel: Linux version 6.12.45-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.3.0 p8) 14.3.0, GNU ld (Gentoo 2.44 p4) 2.44.0) #1 SMP PREEMPT_DYNAMIC Tue Sep 9 03:39:34 -00 2025
Sep 9 05:27:09.838212 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=107bc9be805328e5e30844239fa87d36579f371e3de2c34fec43f6ff6d17b104
Sep 9 05:27:09.838224 kernel: BIOS-provided physical RAM map:
Sep 9 05:27:09.838231 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000002ffff] usable
Sep 9 05:27:09.838237 kernel: BIOS-e820: [mem 0x0000000000030000-0x000000000004ffff] reserved
Sep 9 05:27:09.838244 kernel: BIOS-e820: [mem 0x0000000000050000-0x000000000009efff] usable
Sep 9 05:27:09.838252 kernel: BIOS-e820: [mem 0x000000000009f000-0x000000000009ffff] reserved
Sep 9 05:27:09.838258 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000009b8ecfff] usable
Sep 9 05:27:09.838268 kernel: BIOS-e820: [mem 0x000000009b8ed000-0x000000009bb6cfff] reserved
Sep 9 05:27:09.838275 kernel: BIOS-e820: [mem 0x000000009bb6d000-0x000000009bb7efff] ACPI data
Sep 9 05:27:09.838281 kernel: BIOS-e820: [mem 0x000000009bb7f000-0x000000009bbfefff] ACPI NVS
Sep 9 05:27:09.838290 kernel: BIOS-e820: [mem 0x000000009bbff000-0x000000009bfb0fff] usable
Sep 9 05:27:09.838297 kernel: BIOS-e820: [mem 0x000000009bfb1000-0x000000009bfb4fff] reserved
Sep 9 05:27:09.838304 kernel: BIOS-e820: [mem 0x000000009bfb5000-0x000000009bfb6fff] ACPI NVS
Sep 9 05:27:09.838312 kernel: BIOS-e820: [mem 0x000000009bfb7000-0x000000009bffffff] usable
Sep 9 05:27:09.838320 kernel: BIOS-e820: [mem 0x000000009c000000-0x000000009cffffff] reserved
Sep 9 05:27:09.838332 kernel: BIOS-e820: [mem 0x00000000e0000000-0x00000000efffffff] reserved
Sep 9 05:27:09.838339 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Sep 9 05:27:09.838347 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
Sep 9 05:27:09.838354 kernel: NX (Execute Disable) protection: active
Sep 9 05:27:09.838361 kernel: APIC: Static calls initialized
Sep 9 05:27:09.838368 kernel: e820: update [mem 0x9a13e018-0x9a147c57] usable ==> usable
Sep 9 05:27:09.838375 kernel: e820: update [mem 0x9a101018-0x9a13de57] usable ==> usable
Sep 9 05:27:09.838382 kernel: extended physical RAM map:
Sep 9 05:27:09.838389 kernel: reserve setup_data: [mem 0x0000000000000000-0x000000000002ffff] usable
Sep 9 05:27:09.838397 kernel: reserve setup_data: [mem 0x0000000000030000-0x000000000004ffff] reserved
Sep 9 05:27:09.838404 kernel: reserve setup_data: [mem 0x0000000000050000-0x000000000009efff] usable
Sep 9 05:27:09.838413 kernel: reserve setup_data: [mem 0x000000000009f000-0x000000000009ffff] reserved
Sep 9 05:27:09.838420 kernel: reserve setup_data: [mem 0x0000000000100000-0x000000009a101017] usable
Sep 9 05:27:09.838429 kernel: reserve setup_data: [mem 0x000000009a101018-0x000000009a13de57] usable
Sep 9 05:27:09.838438 kernel: reserve setup_data: [mem 0x000000009a13de58-0x000000009a13e017] usable
Sep 9 05:27:09.838447 kernel: reserve setup_data: [mem 0x000000009a13e018-0x000000009a147c57] usable
Sep 9 05:27:09.838456 kernel: reserve setup_data: [mem 0x000000009a147c58-0x000000009b8ecfff] usable
Sep 9 05:27:09.838465 kernel: reserve setup_data: [mem 0x000000009b8ed000-0x000000009bb6cfff] reserved
Sep 9 05:27:09.838474 kernel: reserve setup_data: [mem 0x000000009bb6d000-0x000000009bb7efff] ACPI data
Sep 9 05:27:09.838483 kernel: reserve setup_data: [mem 0x000000009bb7f000-0x000000009bbfefff] ACPI NVS
Sep 9 05:27:09.838492 kernel: reserve setup_data: [mem 0x000000009bbff000-0x000000009bfb0fff] usable
Sep 9 05:27:09.838501 kernel: reserve setup_data: [mem 0x000000009bfb1000-0x000000009bfb4fff] reserved
Sep 9 05:27:09.838513 kernel: reserve setup_data: [mem 0x000000009bfb5000-0x000000009bfb6fff] ACPI NVS
Sep 9 05:27:09.838522 kernel: reserve setup_data: [mem 0x000000009bfb7000-0x000000009bffffff] usable
Sep 9 05:27:09.838536 kernel: reserve setup_data: [mem 0x000000009c000000-0x000000009cffffff] reserved
Sep 9 05:27:09.838545 kernel: reserve setup_data: [mem 0x00000000e0000000-0x00000000efffffff] reserved
Sep 9 05:27:09.838555 kernel: reserve setup_data: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Sep 9 05:27:09.838565 kernel: reserve setup_data: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
Sep 9 05:27:09.838577 kernel: efi: EFI v2.7 by EDK II
Sep 9 05:27:09.838587 kernel: efi: SMBIOS=0x9b9d5000 ACPI=0x9bb7e000 ACPI 2.0=0x9bb7e014 MEMATTR=0x9a1af018 RNG=0x9bb73018
Sep 9 05:27:09.838607 kernel: random: crng init done
Sep 9 05:27:09.838616 kernel: Kernel is locked down from EFI Secure Boot; see man kernel_lockdown.7
Sep 9 05:27:09.838626 kernel: secureboot: Secure boot enabled
Sep 9 05:27:09.838635 kernel: SMBIOS 2.8 present.
Sep 9 05:27:09.838644 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS unknown 02/02/2022
Sep 9 05:27:09.838653 kernel: DMI: Memory slots populated: 1/1
Sep 9 05:27:09.838663 kernel: Hypervisor detected: KVM
Sep 9 05:27:09.838673 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Sep 9 05:27:09.838682 kernel: kvm-clock: using sched offset of 6902540272 cycles
Sep 9 05:27:09.838696 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Sep 9 05:27:09.838706 kernel: tsc: Detected 2794.748 MHz processor
Sep 9 05:27:09.838716 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Sep 9 05:27:09.838726 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Sep 9 05:27:09.838736 kernel: last_pfn = 0x9c000 max_arch_pfn = 0x400000000
Sep 9 05:27:09.838746 kernel: MTRR map: 4 entries (2 fixed + 2 variable; max 18), built from 8 variable MTRRs
Sep 9 05:27:09.838759 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Sep 9 05:27:09.838772 kernel: Using GB pages for direct mapping
Sep 9 05:27:09.838784 kernel: ACPI: Early table checksum verification disabled
Sep 9 05:27:09.838797 kernel: ACPI: RSDP 0x000000009BB7E014 000024 (v02 BOCHS )
Sep 9 05:27:09.838824 kernel: ACPI: XSDT 0x000000009BB7D0E8 000054 (v01 BOCHS BXPC 00000001 01000013)
Sep 9 05:27:09.838834 kernel: ACPI: FACP 0x000000009BB79000 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Sep 9 05:27:09.838844 kernel: ACPI: DSDT 0x000000009BB7A000 002237 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 9 05:27:09.838853 kernel: ACPI: FACS 0x000000009BBDD000 000040
Sep 9 05:27:09.838863 kernel: ACPI: APIC 0x000000009BB78000 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 9 05:27:09.838873 kernel: ACPI: HPET 0x000000009BB77000 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 9 05:27:09.838882 kernel: ACPI: MCFG 0x000000009BB76000 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 9 05:27:09.838892 kernel: ACPI: WAET 0x000000009BB75000 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 9 05:27:09.838905 kernel: ACPI: BGRT 0x000000009BB74000 000038 (v01 INTEL EDK2 00000002 01000013)
Sep 9 05:27:09.838915 kernel: ACPI: Reserving FACP table memory at [mem 0x9bb79000-0x9bb790f3]
Sep 9 05:27:09.838925 kernel: ACPI: Reserving DSDT table memory at [mem 0x9bb7a000-0x9bb7c236]
Sep 9 05:27:09.838935 kernel: ACPI: Reserving FACS table memory at [mem 0x9bbdd000-0x9bbdd03f]
Sep 9 05:27:09.838957 kernel: ACPI: Reserving APIC table memory at [mem 0x9bb78000-0x9bb7808f]
Sep 9 05:27:09.838968 kernel: ACPI: Reserving HPET table memory at [mem 0x9bb77000-0x9bb77037]
Sep 9 05:27:09.838978 kernel: ACPI: Reserving MCFG table memory at [mem 0x9bb76000-0x9bb7603b]
Sep 9 05:27:09.838987 kernel: ACPI: Reserving WAET table memory at [mem 0x9bb75000-0x9bb75027]
Sep 9 05:27:09.838997 kernel: ACPI: Reserving BGRT table memory at [mem 0x9bb74000-0x9bb74037]
Sep 9 05:27:09.839011 kernel: No NUMA configuration found
Sep 9 05:27:09.839021 kernel: Faking a node at [mem 0x0000000000000000-0x000000009bffffff]
Sep 9 05:27:09.839031 kernel: NODE_DATA(0) allocated [mem 0x9bf57dc0-0x9bf5efff]
Sep 9 05:27:09.839040 kernel: Zone ranges:
Sep 9 05:27:09.839050 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Sep 9 05:27:09.839059 kernel: DMA32 [mem 0x0000000001000000-0x000000009bffffff]
Sep 9 05:27:09.839066 kernel: Normal empty
Sep 9 05:27:09.839074 kernel: Device empty
Sep 9 05:27:09.839081 kernel: Movable zone start for each node
Sep 9 05:27:09.839091 kernel: Early memory node ranges
Sep 9 05:27:09.839099 kernel: node 0: [mem 0x0000000000001000-0x000000000002ffff]
Sep 9 05:27:09.839107 kernel: node 0: [mem 0x0000000000050000-0x000000000009efff]
Sep 9 05:27:09.839114 kernel: node 0: [mem 0x0000000000100000-0x000000009b8ecfff]
Sep 9 05:27:09.839122 kernel: node 0: [mem 0x000000009bbff000-0x000000009bfb0fff]
Sep 9 05:27:09.839129 kernel: node 0: [mem 0x000000009bfb7000-0x000000009bffffff]
Sep 9 05:27:09.839137 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009bffffff]
Sep 9 05:27:09.839144 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Sep 9 05:27:09.839152 kernel: On node 0, zone DMA: 32 pages in unavailable ranges
Sep 9 05:27:09.839162 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Sep 9 05:27:09.839170 kernel: On node 0, zone DMA32: 786 pages in unavailable ranges
Sep 9 05:27:09.839208 kernel: On node 0, zone DMA32: 6 pages in unavailable ranges
Sep 9 05:27:09.839225 kernel: On node 0, zone DMA32: 16384 pages in unavailable ranges
Sep 9 05:27:09.839247 kernel: ACPI: PM-Timer IO Port: 0x608
Sep 9 05:27:09.839257 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Sep 9 05:27:09.839265 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Sep 9 05:27:09.839272 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Sep 9 05:27:09.839280 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Sep 9 05:27:09.839293 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Sep 9 05:27:09.839310 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Sep 9 05:27:09.839326 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Sep 9 05:27:09.839349 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Sep 9 05:27:09.839359 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Sep 9 05:27:09.839367 kernel: TSC deadline timer available
Sep 9 05:27:09.839374 kernel: CPU topo: Max. logical packages: 1
Sep 9 05:27:09.839382 kernel: CPU topo: Max. logical dies: 1
Sep 9 05:27:09.839390 kernel: CPU topo: Max. dies per package: 1
Sep 9 05:27:09.839407 kernel: CPU topo: Max. threads per core: 1
Sep 9 05:27:09.839415 kernel: CPU topo: Num. cores per package: 4
Sep 9 05:27:09.839423 kernel: CPU topo: Num. threads per package: 4
Sep 9 05:27:09.839433 kernel: CPU topo: Allowing 4 present CPUs plus 0 hotplug CPUs
Sep 9 05:27:09.839443 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Sep 9 05:27:09.839451 kernel: kvm-guest: KVM setup pv remote TLB flush
Sep 9 05:27:09.839459 kernel: kvm-guest: setup PV sched yield
Sep 9 05:27:09.839467 kernel: [mem 0x9d000000-0xdfffffff] available for PCI devices
Sep 9 05:27:09.839478 kernel: Booting paravirtualized kernel on KVM
Sep 9 05:27:09.839486 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Sep 9 05:27:09.839494 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1
Sep 9 05:27:09.839502 kernel: percpu: Embedded 60 pages/cpu s207832 r8192 d29736 u524288
Sep 9 05:27:09.839510 kernel: pcpu-alloc: s207832 r8192 d29736 u524288 alloc=1*2097152
Sep 9 05:27:09.839518 kernel: pcpu-alloc: [0] 0 1 2 3
Sep 9 05:27:09.839525 kernel: kvm-guest: PV spinlocks enabled
Sep 9 05:27:09.839533 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Sep 9 05:27:09.839543 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=107bc9be805328e5e30844239fa87d36579f371e3de2c34fec43f6ff6d17b104
Sep 9 05:27:09.839554 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Sep 9 05:27:09.839562 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Sep 9 05:27:09.839570 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Sep 9 05:27:09.839578 kernel: Fallback order for Node 0: 0
Sep 9 05:27:09.839586 kernel: Built 1 zonelists, mobility grouping on. Total pages: 638054
Sep 9 05:27:09.839602 kernel: Policy zone: DMA32
Sep 9 05:27:09.839611 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Sep 9 05:27:09.839618 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Sep 9 05:27:09.839628 kernel: ftrace: allocating 40102 entries in 157 pages
Sep 9 05:27:09.839636 kernel: ftrace: allocated 157 pages with 5 groups
Sep 9 05:27:09.839644 kernel: Dynamic Preempt: voluntary
Sep 9 05:27:09.839652 kernel: rcu: Preemptible hierarchical RCU implementation.
Sep 9 05:27:09.839661 kernel: rcu: RCU event tracing is enabled.
Sep 9 05:27:09.839669 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Sep 9 05:27:09.839678 kernel: Trampoline variant of Tasks RCU enabled.
Sep 9 05:27:09.839686 kernel: Rude variant of Tasks RCU enabled.
Sep 9 05:27:09.839693 kernel: Tracing variant of Tasks RCU enabled.
Sep 9 05:27:09.839702 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Sep 9 05:27:09.839712 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Sep 9 05:27:09.839720 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Sep 9 05:27:09.839729 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Sep 9 05:27:09.839740 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Sep 9 05:27:09.839748 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16
Sep 9 05:27:09.839756 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Sep 9 05:27:09.839764 kernel: Console: colour dummy device 80x25
Sep 9 05:27:09.839772 kernel: printk: legacy console [ttyS0] enabled
Sep 9 05:27:09.839779 kernel: ACPI: Core revision 20240827
Sep 9 05:27:09.839790 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Sep 9 05:27:09.839798 kernel: APIC: Switch to symmetric I/O mode setup
Sep 9 05:27:09.839822 kernel: x2apic enabled
Sep 9 05:27:09.839830 kernel: APIC: Switched APIC routing to: physical x2apic
Sep 9 05:27:09.839838 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask()
Sep 9 05:27:09.839846 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself()
Sep 9 05:27:09.839854 kernel: kvm-guest: setup PV IPIs
Sep 9 05:27:09.839862 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Sep 9 05:27:09.839870 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x2848df6a9de, max_idle_ns: 440795280912 ns
Sep 9 05:27:09.839882 kernel: Calibrating delay loop (skipped) preset value.. 5589.49 BogoMIPS (lpj=2794748)
Sep 9 05:27:09.839889 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Sep 9 05:27:09.839897 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Sep 9 05:27:09.839905 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Sep 9 05:27:09.839916 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Sep 9 05:27:09.839924 kernel: Spectre V2 : Mitigation: Retpolines
Sep 9 05:27:09.839933 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Sep 9 05:27:09.839943 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls
Sep 9 05:27:09.839954 kernel: active return thunk: retbleed_return_thunk
Sep 9 05:27:09.839963 kernel: RETBleed: Mitigation: untrained return thunk
Sep 9 05:27:09.839972 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Sep 9 05:27:09.839980 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Sep 9 05:27:09.839988 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied!
Sep 9 05:27:09.839996 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options.
Sep 9 05:27:09.840004 kernel: active return thunk: srso_return_thunk
Sep 9 05:27:09.840012 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode
Sep 9 05:27:09.840020 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Sep 9 05:27:09.840031 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Sep 9 05:27:09.840039 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Sep 9 05:27:09.840047 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Sep 9 05:27:09.840055 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format.
Sep 9 05:27:09.840063 kernel: Freeing SMP alternatives memory: 32K
Sep 9 05:27:09.840070 kernel: pid_max: default: 32768 minimum: 301
Sep 9 05:27:09.840078 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima
Sep 9 05:27:09.840086 kernel: landlock: Up and running.
Sep 9 05:27:09.840094 kernel: SELinux: Initializing.
Sep 9 05:27:09.840104 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Sep 9 05:27:09.840112 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Sep 9 05:27:09.840121 kernel: smpboot: CPU0: AMD EPYC 7402P 24-Core Processor (family: 0x17, model: 0x31, stepping: 0x0)
Sep 9 05:27:09.840129 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
Sep 9 05:27:09.840137 kernel: ... version: 0
Sep 9 05:27:09.840147 kernel: ... bit width: 48
Sep 9 05:27:09.840155 kernel: ... generic registers: 6
Sep 9 05:27:09.840163 kernel: ... value mask: 0000ffffffffffff
Sep 9 05:27:09.840171 kernel: ... max period: 00007fffffffffff
Sep 9 05:27:09.840181 kernel: ... fixed-purpose events: 0
Sep 9 05:27:09.840189 kernel: ... event mask: 000000000000003f
Sep 9 05:27:09.840197 kernel: signal: max sigframe size: 1776
Sep 9 05:27:09.840205 kernel: rcu: Hierarchical SRCU implementation.
Sep 9 05:27:09.840213 kernel: rcu: Max phase no-delay instances is 400.
Sep 9 05:27:09.840221 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level
Sep 9 05:27:09.840229 kernel: smp: Bringing up secondary CPUs ...
Sep 9 05:27:09.840237 kernel: smpboot: x86: Booting SMP configuration:
Sep 9 05:27:09.840245 kernel: .... node #0, CPUs: #1 #2 #3
Sep 9 05:27:09.840256 kernel: smp: Brought up 1 node, 4 CPUs
Sep 9 05:27:09.840264 kernel: smpboot: Total of 4 processors activated (22357.98 BogoMIPS)
Sep 9 05:27:09.840272 kernel: Memory: 2409220K/2552216K available (14336K kernel code, 2428K rwdata, 9988K rodata, 54076K init, 2892K bss, 137064K reserved, 0K cma-reserved)
Sep 9 05:27:09.840280 kernel: devtmpfs: initialized
Sep 9 05:27:09.840288 kernel: x86/mm: Memory block size: 128MB
Sep 9 05:27:09.840299 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9bb7f000-0x9bbfefff] (524288 bytes)
Sep 9 05:27:09.840310 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9bfb5000-0x9bfb6fff] (8192 bytes)
Sep 9 05:27:09.840327 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Sep 9 05:27:09.840335 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Sep 9 05:27:09.840351 kernel: pinctrl core: initialized pinctrl subsystem
Sep 9 05:27:09.840365 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Sep 9 05:27:09.840373 kernel: audit: initializing netlink subsys (disabled)
Sep 9 05:27:09.840381 kernel: audit: type=2000 audit(1757395626.522:1): state=initialized audit_enabled=0 res=1
Sep 9 05:27:09.840389 kernel: thermal_sys: Registered thermal governor 'step_wise'
Sep 9 05:27:09.840397 kernel: thermal_sys: Registered thermal governor 'user_space'
Sep 9 05:27:09.840405 kernel: cpuidle: using governor menu
Sep 9 05:27:09.840413 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Sep 9 05:27:09.840423 kernel: dca service started, version 1.12.1
Sep 9 05:27:09.840431 kernel: PCI: ECAM [mem 0xe0000000-0xefffffff] (base 0xe0000000) for domain 0000 [bus 00-ff]
Sep 9 05:27:09.840439 kernel: PCI: Using configuration type 1 for base access
Sep 9 05:27:09.840447 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Sep 9 05:27:09.840455 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Sep 9 05:27:09.840463 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Sep 9 05:27:09.840471 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Sep 9 05:27:09.840479 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Sep 9 05:27:09.840487 kernel: ACPI: Added _OSI(Module Device)
Sep 9 05:27:09.840497 kernel: ACPI: Added _OSI(Processor Device)
Sep 9 05:27:09.840505 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Sep 9 05:27:09.840513 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Sep 9 05:27:09.840521 kernel: ACPI: Interpreter enabled
Sep 9 05:27:09.840529 kernel: ACPI: PM: (supports S0 S5)
Sep 9 05:27:09.840537 kernel: ACPI: Using IOAPIC for interrupt routing
Sep 9 05:27:09.840545 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Sep 9 05:27:09.840553 kernel: PCI: Using E820 reservations for host bridge windows
Sep 9 05:27:09.840561 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Sep 9 05:27:09.840570 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Sep 9 05:27:09.840855 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Sep 9 05:27:09.840989 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR]
Sep 9 05:27:09.841111 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability]
Sep 9 05:27:09.841122 kernel: PCI host bridge to bus 0000:00
Sep 9 05:27:09.841258 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Sep 9 05:27:09.841372 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Sep 9 05:27:09.841489 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Sep 9 05:27:09.841609 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xdfffffff window]
Sep 9 05:27:09.841721 kernel: pci_bus 0000:00: root bus resource [mem 0xf0000000-0xfebfffff window]
Sep 9 05:27:09.841912 kernel: pci_bus 0000:00: root bus resource [mem 0x380000000000-0x3807ffffffff window]
Sep 9 05:27:09.842043 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Sep 9 05:27:09.842207 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000 conventional PCI endpoint
Sep 9 05:27:09.842353 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000 conventional PCI endpoint
Sep 9 05:27:09.842476 kernel: pci 0000:00:01.0: BAR 0 [mem 0xc0000000-0xc0ffffff pref]
Sep 9 05:27:09.842607 kernel: pci 0000:00:01.0: BAR 2 [mem 0xc1044000-0xc1044fff]
Sep 9 05:27:09.842730 kernel: pci 0000:00:01.0: ROM [mem 0xffff0000-0xffffffff pref]
Sep 9 05:27:09.842872 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Sep 9 05:27:09.843017 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00 conventional PCI endpoint
Sep 9 05:27:09.843141 kernel: pci 0000:00:02.0: BAR 0 [io 0x6100-0x611f]
Sep 9 05:27:09.843269 kernel: pci 0000:00:02.0: BAR 1 [mem 0xc1043000-0xc1043fff]
Sep 9 05:27:09.843390 kernel: pci 0000:00:02.0: BAR 4 [mem 0x380000000000-0x380000003fff 64bit pref]
Sep 9 05:27:09.843529 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000 conventional PCI endpoint
Sep 9 05:27:09.843685 kernel: pci 0000:00:03.0: BAR 0 [io 0x6000-0x607f]
Sep 9 05:27:09.843839 kernel: pci 0000:00:03.0: BAR 1 [mem 0xc1042000-0xc1042fff]
Sep 9 05:27:09.843965 kernel: pci 0000:00:03.0: BAR 4 [mem 0x380000004000-0x380000007fff 64bit pref]
Sep 9 05:27:09.844108 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint
Sep 9 05:27:09.844247 kernel: pci 0000:00:04.0: BAR 0 [io 0x60e0-0x60ff]
Sep 9 05:27:09.844373 kernel: pci 0000:00:04.0: BAR 1 [mem 0xc1041000-0xc1041fff]
Sep 9 05:27:09.844494 kernel: pci 0000:00:04.0: BAR 4 [mem 0x380000008000-0x38000000bfff 64bit pref]
Sep 9 05:27:09.844625 kernel: pci 0000:00:04.0: ROM [mem 0xfffc0000-0xffffffff pref]
Sep 9 05:27:09.844790 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100 conventional PCI endpoint
Sep 9 05:27:09.844957 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Sep 9 05:27:09.845098 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601 conventional PCI endpoint
Sep 9 05:27:09.845228 kernel: pci 0000:00:1f.2: BAR 4 [io 0x60c0-0x60df]
Sep 9 05:27:09.845348 kernel: pci 0000:00:1f.2: BAR 5 [mem 0xc1040000-0xc1040fff]
Sep 9 05:27:09.845483 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500 conventional PCI endpoint
Sep 9 05:27:09.845615 kernel: pci 0000:00:1f.3: BAR 4 [io 0x6080-0x60bf]
Sep 9 05:27:09.845627 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Sep 9 05:27:09.845635 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Sep 9 05:27:09.845644 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Sep 9 05:27:09.845656 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Sep 9 05:27:09.845664 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Sep 9 05:27:09.845672 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Sep 9 05:27:09.845680 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Sep 9 05:27:09.845689 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Sep 9 05:27:09.845697 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Sep 9 05:27:09.845705 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Sep 9 05:27:09.845713 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Sep 9 05:27:09.845721 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Sep 9 05:27:09.845731 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Sep 9 05:27:09.845739 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Sep 9 05:27:09.845747 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Sep 9 05:27:09.845755 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Sep 9 05:27:09.845763 kernel: iommu: Default domain type: Translated
Sep 9 05:27:09.845771 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Sep 9 05:27:09.845779 kernel: efivars: Registered efivars operations
Sep 9 05:27:09.845788 kernel: PCI: Using ACPI for IRQ routing
Sep 9 05:27:09.845796 kernel: PCI: pci_cache_line_size set to 64 bytes
Sep 9 05:27:09.845820 kernel: e820: reserve RAM buffer [mem 0x0009f000-0x0009ffff]
Sep 9 05:27:09.845827 kernel: e820: reserve RAM buffer [mem 0x9a101018-0x9bffffff]
Sep 9 05:27:09.845835 kernel: e820: reserve RAM buffer [mem 0x9a13e018-0x9bffffff]
Sep 9 05:27:09.845843 kernel: e820: reserve RAM buffer [mem 0x9b8ed000-0x9bffffff]
Sep 9 05:27:09.845851 kernel: e820: reserve RAM buffer [mem 0x9bfb1000-0x9bffffff]
Sep 9 05:27:09.845981 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Sep 9 05:27:09.846112 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Sep 9 05:27:09.846235 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Sep 9 05:27:09.846250 kernel: vgaarb: loaded
Sep 9 05:27:09.846259 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Sep 9 05:27:09.846267 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Sep 9 05:27:09.846275 kernel: clocksource: Switched to clocksource kvm-clock
Sep 9 05:27:09.846283 kernel: VFS: Disk quotas dquot_6.6.0
Sep 9 05:27:09.846291 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Sep 9 05:27:09.846299 kernel: pnp: PnP ACPI init
Sep 9 05:27:09.846445 kernel: system 00:05: [mem 0xe0000000-0xefffffff window] has been reserved
Sep 9 05:27:09.846457 kernel: pnp: PnP ACPI: found 6 devices
Sep 9 05:27:09.846469 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Sep 9 05:27:09.846478 kernel: NET: Registered PF_INET protocol family
Sep 9 05:27:09.846486 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Sep 9 05:27:09.846494 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Sep 9 05:27:09.846502 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Sep 9 05:27:09.846510 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Sep 9 05:27:09.846518 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Sep 9 05:27:09.846526 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Sep 9 05:27:09.846536 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Sep 9 05:27:09.846544 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Sep 9 05:27:09.846552 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Sep 9 05:27:09.846560 kernel: NET: Registered PF_XDP protocol family
Sep 9 05:27:09.846696 kernel: pci 0000:00:04.0: ROM [mem 0xfffc0000-0xffffffff pref]: can't claim; no compatible bridge window
Sep 9 05:27:09.846837 kernel: pci 0000:00:04.0: ROM [mem 0x9d000000-0x9d03ffff pref]: assigned
Sep 9 05:27:09.846952 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Sep 9 05:27:09.847064 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Sep 9 05:27:09.847173 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Sep 9 05:27:09.847289 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xdfffffff window]
Sep 9 05:27:09.847413 kernel: pci_bus 0000:00: resource 8 [mem 0xf0000000-0xfebfffff window]
Sep 9 05:27:09.847527 kernel: pci_bus 0000:00: resource 9 [mem 0x380000000000-0x3807ffffffff window]
Sep 9 05:27:09.847538 kernel: PCI: CLS 0 bytes, default 64
Sep 9 05:27:09.847546 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x2848df6a9de, max_idle_ns: 440795280912 ns
Sep 9 05:27:09.847554 kernel: Initialise system trusted keyrings
Sep 9 05:27:09.847562 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Sep 9 05:27:09.847571 kernel: Key type asymmetric registered
Sep 9 05:27:09.847583 kernel: Asymmetric key parser 'x509' registered
Sep 9 05:27:09.847615 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Sep 9 05:27:09.847626 kernel: io scheduler mq-deadline registered
Sep 9 05:27:09.847635 kernel: io scheduler kyber registered
Sep 9 05:27:09.847643 kernel: io scheduler bfq registered
Sep 9 05:27:09.847651 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Sep 9 05:27:09.847660 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
Sep 9 05:27:09.847669 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23
Sep 9 05:27:09.847677 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20
Sep 9 05:27:09.847687 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Sep 9 05:27:09.847696 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Sep 9 05:27:09.847704 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Sep 9 05:27:09.847713 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Sep 9 05:27:09.847721 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Sep 9 05:27:09.847729 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Sep 9 05:27:09.847883 kernel: rtc_cmos 00:04: RTC can wake from S4
Sep 9 05:27:09.848003 kernel: rtc_cmos 00:04: registered as rtc0
Sep 9 05:27:09.848126 kernel: rtc_cmos 00:04: setting system clock to 2025-09-09T05:27:09 UTC (1757395629)
Sep 9 05:27:09.848243 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram
Sep 9 05:27:09.848254 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
Sep 9 05:27:09.848263 kernel: efifb: probing for efifb
Sep 9 05:27:09.848271 kernel: efifb: framebuffer at 0xc0000000, using 4000k, total 4000k
Sep 9 05:27:09.848279 kernel: efifb: mode is 1280x800x32, linelength=5120, pages=1
Sep 9 05:27:09.848287 kernel: efifb: scrolling: redraw
Sep 9 05:27:09.848295 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0
Sep 9 05:27:09.848304 kernel: Console: switching to colour frame buffer device 160x50
Sep 9 05:27:09.848315 kernel: fb0: EFI VGA frame buffer device
Sep 9 05:27:09.848326 kernel: pstore: Using crash dump compression: deflate
Sep 9 05:27:09.848334 kernel: pstore: Registered efi_pstore as persistent store backend
Sep 9 05:27:09.848343 kernel: NET: Registered PF_INET6 protocol family
Sep 9 05:27:09.848351 kernel: Segment Routing with IPv6
Sep 9 05:27:09.848359 kernel: In-situ OAM (IOAM) with IPv6
Sep 9 05:27:09.848370 kernel: NET: Registered PF_PACKET protocol family
Sep 9 05:27:09.848378 kernel: Key type dns_resolver registered
Sep 9 05:27:09.848386 kernel: IPI shorthand broadcast: enabled
Sep 9 05:27:09.848394 kernel: sched_clock: Marking stable (3946003120, 139458268)->(4100930684, -15469296)
Sep 9 05:27:09.848403 kernel: registered taskstats version 1
Sep 9 05:27:09.848411 kernel: Loading compiled-in X.509 certificates
Sep 9 05:27:09.848419 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.45-flatcar: 884b9ad6a330f59ae6e6488b20a5491e41ff24a3'
Sep 9 05:27:09.848427 kernel: Demotion targets for Node 0: null
Sep 9 05:27:09.848438 kernel: Key type .fscrypt registered
Sep 9 05:27:09.848448 kernel: Key type fscrypt-provisioning registered
Sep 9 05:27:09.848456 kernel: ima: No TPM chip found, activating TPM-bypass!
Sep 9 05:27:09.848464 kernel: ima: Allocated hash algorithm: sha1
Sep 9 05:27:09.848473 kernel: ima: No architecture policies found
Sep 9 05:27:09.848486 kernel: clk: Disabling unused clocks
Sep 9 05:27:09.848495 kernel: Warning: unable to open an initial console.
Sep 9 05:27:09.848503 kernel: Freeing unused kernel image (initmem) memory: 54076K Sep 9 05:27:09.848511 kernel: Write protecting the kernel read-only data: 24576k Sep 9 05:27:09.848522 kernel: Freeing unused kernel image (rodata/data gap) memory: 252K Sep 9 05:27:09.848530 kernel: Run /init as init process Sep 9 05:27:09.848538 kernel: with arguments: Sep 9 05:27:09.848546 kernel: /init Sep 9 05:27:09.848555 kernel: with environment: Sep 9 05:27:09.848563 kernel: HOME=/ Sep 9 05:27:09.848571 kernel: TERM=linux Sep 9 05:27:09.848579 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Sep 9 05:27:09.848600 systemd[1]: Successfully made /usr/ read-only. Sep 9 05:27:09.848614 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Sep 9 05:27:09.848624 systemd[1]: Detected virtualization kvm. Sep 9 05:27:09.848632 systemd[1]: Detected architecture x86-64. Sep 9 05:27:09.848641 systemd[1]: Running in initrd. Sep 9 05:27:09.848649 systemd[1]: No hostname configured, using default hostname. Sep 9 05:27:09.848659 systemd[1]: Hostname set to . Sep 9 05:27:09.848667 systemd[1]: Initializing machine ID from VM UUID. Sep 9 05:27:09.848678 systemd[1]: Queued start job for default target initrd.target. Sep 9 05:27:09.848687 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Sep 9 05:27:09.848696 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Sep 9 05:27:09.848705 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Sep 9 05:27:09.848714 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... 
Sep 9 05:27:09.848723 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Sep 9 05:27:09.848733 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Sep 9 05:27:09.848745 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Sep 9 05:27:09.848754 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Sep 9 05:27:09.848763 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Sep 9 05:27:09.848772 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Sep 9 05:27:09.848780 systemd[1]: Reached target paths.target - Path Units. Sep 9 05:27:09.848789 systemd[1]: Reached target slices.target - Slice Units. Sep 9 05:27:09.848798 systemd[1]: Reached target swap.target - Swaps. Sep 9 05:27:09.848824 systemd[1]: Reached target timers.target - Timer Units. Sep 9 05:27:09.848836 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Sep 9 05:27:09.848845 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Sep 9 05:27:09.848854 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Sep 9 05:27:09.848862 systemd[1]: Listening on systemd-journald.socket - Journal Sockets. Sep 9 05:27:09.848871 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Sep 9 05:27:09.848880 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Sep 9 05:27:09.848889 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Sep 9 05:27:09.848898 systemd[1]: Reached target sockets.target - Socket Units. Sep 9 05:27:09.848906 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Sep 9 05:27:09.848918 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... 
Sep 9 05:27:09.848927 systemd[1]: Finished network-cleanup.service - Network Cleanup. Sep 9 05:27:09.848936 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply). Sep 9 05:27:09.848945 systemd[1]: Starting systemd-fsck-usr.service... Sep 9 05:27:09.848955 systemd[1]: Starting systemd-journald.service - Journal Service... Sep 9 05:27:09.848965 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Sep 9 05:27:09.848976 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Sep 9 05:27:09.848985 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Sep 9 05:27:09.849000 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Sep 9 05:27:09.849009 systemd[1]: Finished systemd-fsck-usr.service. Sep 9 05:27:09.849056 systemd-journald[220]: Collecting audit messages is disabled. Sep 9 05:27:09.849080 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Sep 9 05:27:09.849089 systemd-journald[220]: Journal started Sep 9 05:27:09.849110 systemd-journald[220]: Runtime Journal (/run/log/journal/325fe3cbe53349cda5eb2ec7c3ca0d23) is 6M, max 48.2M, 42.2M free. Sep 9 05:27:09.840653 systemd-modules-load[221]: Inserted module 'overlay' Sep 9 05:27:09.851902 systemd[1]: Started systemd-journald.service - Journal Service. Sep 9 05:27:09.852518 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Sep 9 05:27:09.857659 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Sep 9 05:27:09.860501 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Sep 9 05:27:09.869825 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. 
Update your scripts to load br_netfilter if you need this. Sep 9 05:27:09.871391 systemd-modules-load[221]: Inserted module 'br_netfilter' Sep 9 05:27:09.872597 kernel: Bridge firewalling registered Sep 9 05:27:09.878463 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Sep 9 05:27:09.880937 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Sep 9 05:27:09.881479 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Sep 9 05:27:09.890072 systemd-tmpfiles[233]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring. Sep 9 05:27:09.894298 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Sep 9 05:27:09.894999 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Sep 9 05:27:09.905051 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Sep 9 05:27:09.905376 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Sep 9 05:27:09.907769 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Sep 9 05:27:09.909948 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Sep 9 05:27:09.913109 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Sep 9 05:27:09.938742 dracut-cmdline[262]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=107bc9be805328e5e30844239fa87d36579f371e3de2c34fec43f6ff6d17b104 Sep 9 05:27:09.962773 systemd-resolved[261]: Positive Trust Anchors: Sep 9 05:27:09.962798 systemd-resolved[261]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Sep 9 05:27:09.962841 systemd-resolved[261]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Sep 9 05:27:09.965660 systemd-resolved[261]: Defaulting to hostname 'linux'. Sep 9 05:27:09.971398 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Sep 9 05:27:09.972520 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Sep 9 05:27:10.046855 kernel: SCSI subsystem initialized Sep 9 05:27:10.057837 kernel: Loading iSCSI transport class v2.0-870. Sep 9 05:27:10.070847 kernel: iscsi: registered transport (tcp) Sep 9 05:27:10.092841 kernel: iscsi: registered transport (qla4xxx) Sep 9 05:27:10.092888 kernel: QLogic iSCSI HBA Driver Sep 9 05:27:10.122609 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Sep 9 05:27:10.151280 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Sep 9 05:27:10.151735 systemd[1]: Reached target network-pre.target - Preparation for Network. Sep 9 05:27:10.241605 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Sep 9 05:27:10.244982 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... 
Sep 9 05:27:10.315843 kernel: raid6: avx2x4 gen() 26811 MB/s Sep 9 05:27:10.332839 kernel: raid6: avx2x2 gen() 25645 MB/s Sep 9 05:27:10.350086 kernel: raid6: avx2x1 gen() 17772 MB/s Sep 9 05:27:10.350111 kernel: raid6: using algorithm avx2x4 gen() 26811 MB/s Sep 9 05:27:10.368117 kernel: raid6: .... xor() 6700 MB/s, rmw enabled Sep 9 05:27:10.368207 kernel: raid6: using avx2x2 recovery algorithm Sep 9 05:27:10.391842 kernel: xor: automatically using best checksumming function avx Sep 9 05:27:10.565843 kernel: Btrfs loaded, zoned=no, fsverity=no Sep 9 05:27:10.574723 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Sep 9 05:27:10.579304 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Sep 9 05:27:10.611739 systemd-udevd[471]: Using default interface naming scheme 'v255'. Sep 9 05:27:10.618909 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Sep 9 05:27:10.620225 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Sep 9 05:27:10.643731 dracut-pre-trigger[473]: rd.md=0: removing MD RAID activation Sep 9 05:27:10.676880 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Sep 9 05:27:10.680554 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Sep 9 05:27:10.773781 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Sep 9 05:27:10.780048 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Sep 9 05:27:10.832909 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues Sep 9 05:27:10.837539 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB) Sep 9 05:27:10.847814 kernel: cryptd: max_cpu_qlen set to 1000 Sep 9 05:27:10.847830 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. 
Sep 9 05:27:10.847841 kernel: GPT:9289727 != 19775487 Sep 9 05:27:10.847861 kernel: GPT:Alternate GPT header not at the end of the disk. Sep 9 05:27:10.847883 kernel: GPT:9289727 != 19775487 Sep 9 05:27:10.847894 kernel: GPT: Use GNU Parted to correct GPT errors. Sep 9 05:27:10.847904 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Sep 9 05:27:10.853829 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input2 Sep 9 05:27:10.853857 kernel: AES CTR mode by8 optimization enabled Sep 9 05:27:10.883855 kernel: libata version 3.00 loaded. Sep 9 05:27:10.886410 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Sep 9 05:27:10.886604 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Sep 9 05:27:10.889738 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Sep 9 05:27:10.892093 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Sep 9 05:27:10.894255 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. Sep 9 05:27:10.906359 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Sep 9 05:27:10.908135 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Sep 9 05:27:10.914871 kernel: ahci 0000:00:1f.2: version 3.0 Sep 9 05:27:10.917831 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16 Sep 9 05:27:10.923243 kernel: ahci 0000:00:1f.2: AHCI vers 0001.0000, 32 command slots, 1.5 Gbps, SATA mode Sep 9 05:27:10.923462 kernel: ahci 0000:00:1f.2: 6/6 ports implemented (port mask 0x3f) Sep 9 05:27:10.923617 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only Sep 9 05:27:10.927397 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. 
Sep 9 05:27:10.931530 kernel: scsi host0: ahci Sep 9 05:27:10.931781 kernel: scsi host1: ahci Sep 9 05:27:10.932326 kernel: scsi host2: ahci Sep 9 05:27:10.932489 kernel: scsi host3: ahci Sep 9 05:27:10.933868 kernel: scsi host4: ahci Sep 9 05:27:10.934132 kernel: scsi host5: ahci Sep 9 05:27:10.934326 kernel: ata1: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040100 irq 34 lpm-pol 1 Sep 9 05:27:10.935863 kernel: ata2: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040180 irq 34 lpm-pol 1 Sep 9 05:27:10.935887 kernel: ata3: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040200 irq 34 lpm-pol 1 Sep 9 05:27:10.937597 kernel: ata4: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040280 irq 34 lpm-pol 1 Sep 9 05:27:10.937619 kernel: ata5: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040300 irq 34 lpm-pol 1 Sep 9 05:27:10.939342 kernel: ata6: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040380 irq 34 lpm-pol 1 Sep 9 05:27:10.954230 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. Sep 9 05:27:10.968515 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Sep 9 05:27:10.975300 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. Sep 9 05:27:10.975384 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132. Sep 9 05:27:10.981502 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Sep 9 05:27:10.982306 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Sep 9 05:27:11.004300 disk-uuid[633]: Primary Header is updated. Sep 9 05:27:11.004300 disk-uuid[633]: Secondary Entries is updated. Sep 9 05:27:11.004300 disk-uuid[633]: Secondary Header is updated. 
Sep 9 05:27:11.007826 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Sep 9 05:27:11.028971 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Sep 9 05:27:11.249147 kernel: ata6: SATA link down (SStatus 0 SControl 300) Sep 9 05:27:11.249215 kernel: ata2: SATA link down (SStatus 0 SControl 300) Sep 9 05:27:11.249240 kernel: ata4: SATA link down (SStatus 0 SControl 300) Sep 9 05:27:11.250852 kernel: ata5: SATA link down (SStatus 0 SControl 300) Sep 9 05:27:11.250948 kernel: ata1: SATA link down (SStatus 0 SControl 300) Sep 9 05:27:11.251846 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300) Sep 9 05:27:11.252867 kernel: ata3.00: LPM support broken, forcing max_power Sep 9 05:27:11.252885 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100 Sep 9 05:27:11.253946 kernel: ata3.00: applying bridge limits Sep 9 05:27:11.255123 kernel: ata3.00: LPM support broken, forcing max_power Sep 9 05:27:11.255151 kernel: ata3.00: configured for UDMA/100 Sep 9 05:27:11.257831 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5 Sep 9 05:27:11.295844 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray Sep 9 05:27:11.296064 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Sep 9 05:27:11.310016 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0 Sep 9 05:27:11.611712 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Sep 9 05:27:11.612318 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Sep 9 05:27:11.615082 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Sep 9 05:27:11.617254 systemd[1]: Reached target remote-fs.target - Remote File Systems. Sep 9 05:27:11.620389 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Sep 9 05:27:11.657474 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. 
Sep 9 05:27:12.017874 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Sep 9 05:27:12.018719 disk-uuid[636]: The operation has completed successfully. Sep 9 05:27:12.048786 systemd[1]: disk-uuid.service: Deactivated successfully. Sep 9 05:27:12.049016 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Sep 9 05:27:12.089311 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Sep 9 05:27:12.121210 sh[667]: Success Sep 9 05:27:12.139837 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Sep 9 05:27:12.139895 kernel: device-mapper: uevent: version 1.0.3 Sep 9 05:27:12.141837 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev Sep 9 05:27:12.150928 kernel: device-mapper: verity: sha256 using shash "sha256-ni" Sep 9 05:27:12.181915 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Sep 9 05:27:12.185564 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Sep 9 05:27:12.204309 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Sep 9 05:27:12.209898 kernel: BTRFS: device fsid 9ca60a92-6b53-4529-adc0-1f4392d2ad56 devid 1 transid 37 /dev/mapper/usr (253:0) scanned by mount (679) Sep 9 05:27:12.209958 kernel: BTRFS info (device dm-0): first mount of filesystem 9ca60a92-6b53-4529-adc0-1f4392d2ad56 Sep 9 05:27:12.211833 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Sep 9 05:27:12.216837 kernel: BTRFS info (device dm-0): disabling log replay at mount time Sep 9 05:27:12.216873 kernel: BTRFS info (device dm-0): enabling free space tree Sep 9 05:27:12.218381 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Sep 9 05:27:12.221113 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System. 
Sep 9 05:27:12.223711 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Sep 9 05:27:12.226886 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Sep 9 05:27:12.229452 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Sep 9 05:27:12.258476 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (710) Sep 9 05:27:12.258560 kernel: BTRFS info (device vda6): first mount of filesystem d4e5a7a8-c50a-463e-827d-ca249a0b8b8b Sep 9 05:27:12.258577 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Sep 9 05:27:12.263132 kernel: BTRFS info (device vda6): turning on async discard Sep 9 05:27:12.263163 kernel: BTRFS info (device vda6): enabling free space tree Sep 9 05:27:12.270020 kernel: BTRFS info (device vda6): last unmount of filesystem d4e5a7a8-c50a-463e-827d-ca249a0b8b8b Sep 9 05:27:12.270245 systemd[1]: Finished ignition-setup.service - Ignition (setup). Sep 9 05:27:12.273637 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Sep 9 05:27:12.412557 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Sep 9 05:27:12.416044 systemd[1]: Starting systemd-networkd.service - Network Configuration... 
Sep 9 05:27:12.468797 ignition[755]: Ignition 2.22.0 Sep 9 05:27:12.468838 ignition[755]: Stage: fetch-offline Sep 9 05:27:12.468894 ignition[755]: no configs at "/usr/lib/ignition/base.d" Sep 9 05:27:12.468906 ignition[755]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Sep 9 05:27:12.469012 ignition[755]: parsed url from cmdline: "" Sep 9 05:27:12.469017 ignition[755]: no config URL provided Sep 9 05:27:12.469024 ignition[755]: reading system config file "/usr/lib/ignition/user.ign" Sep 9 05:27:12.469037 ignition[755]: no config at "/usr/lib/ignition/user.ign" Sep 9 05:27:12.469067 ignition[755]: op(1): [started] loading QEMU firmware config module Sep 9 05:27:12.469074 ignition[755]: op(1): executing: "modprobe" "qemu_fw_cfg" Sep 9 05:27:12.478783 ignition[755]: op(1): [finished] loading QEMU firmware config module Sep 9 05:27:12.482092 systemd-networkd[853]: lo: Link UP Sep 9 05:27:12.482102 systemd-networkd[853]: lo: Gained carrier Sep 9 05:27:12.484175 systemd-networkd[853]: Enumeration completed Sep 9 05:27:12.484625 systemd-networkd[853]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 9 05:27:12.484630 systemd-networkd[853]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Sep 9 05:27:12.484914 systemd[1]: Started systemd-networkd.service - Network Configuration. Sep 9 05:27:12.486120 systemd-networkd[853]: eth0: Link UP Sep 9 05:27:12.486379 systemd-networkd[853]: eth0: Gained carrier Sep 9 05:27:12.486390 systemd-networkd[853]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 9 05:27:12.486958 systemd[1]: Reached target network.target - Network. 
Sep 9 05:27:12.501857 systemd-networkd[853]: eth0: DHCPv4 address 10.0.0.16/16, gateway 10.0.0.1 acquired from 10.0.0.1 Sep 9 05:27:12.531374 ignition[755]: parsing config with SHA512: 154d2271686000780d0be51dae1e718101553bd575c991c32510c7acf6d6d0e9f1bc23d73cb1fb7d6a7062bb945de1c9b191a6673e9b1ae37cabd7bc602aa4ab Sep 9 05:27:12.625596 unknown[755]: fetched base config from "system" Sep 9 05:27:12.625616 unknown[755]: fetched user config from "qemu" Sep 9 05:27:12.626319 ignition[755]: fetch-offline: fetch-offline passed Sep 9 05:27:12.626454 ignition[755]: Ignition finished successfully Sep 9 05:27:12.630162 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Sep 9 05:27:12.630485 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Sep 9 05:27:12.631583 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Sep 9 05:27:12.678639 ignition[863]: Ignition 2.22.0 Sep 9 05:27:12.678654 ignition[863]: Stage: kargs Sep 9 05:27:12.678830 ignition[863]: no configs at "/usr/lib/ignition/base.d" Sep 9 05:27:12.678845 ignition[863]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Sep 9 05:27:12.679921 ignition[863]: kargs: kargs passed Sep 9 05:27:12.680022 ignition[863]: Ignition finished successfully Sep 9 05:27:12.688305 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Sep 9 05:27:12.690940 systemd[1]: Starting ignition-disks.service - Ignition (disks)... 
Sep 9 05:27:12.742696 ignition[871]: Ignition 2.22.0 Sep 9 05:27:12.742712 ignition[871]: Stage: disks Sep 9 05:27:12.742900 ignition[871]: no configs at "/usr/lib/ignition/base.d" Sep 9 05:27:12.742913 ignition[871]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Sep 9 05:27:12.744828 ignition[871]: disks: disks passed Sep 9 05:27:12.744892 ignition[871]: Ignition finished successfully Sep 9 05:27:12.748394 systemd[1]: Finished ignition-disks.service - Ignition (disks). Sep 9 05:27:12.750942 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Sep 9 05:27:12.753124 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Sep 9 05:27:12.753219 systemd[1]: Reached target local-fs.target - Local File Systems. Sep 9 05:27:12.753639 systemd[1]: Reached target sysinit.target - System Initialization. Sep 9 05:27:12.754043 systemd[1]: Reached target basic.target - Basic System. Sep 9 05:27:12.755853 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Sep 9 05:27:12.793792 systemd-fsck[882]: ROOT: clean, 15/553520 files, 52789/553472 blocks Sep 9 05:27:12.806710 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Sep 9 05:27:12.808752 systemd[1]: Mounting sysroot.mount - /sysroot... Sep 9 05:27:12.953827 kernel: EXT4-fs (vda9): mounted filesystem d2d7815e-fa16-4396-ab9d-ac540c1d8856 r/w with ordered data mode. Quota mode: none. Sep 9 05:27:12.954550 systemd[1]: Mounted sysroot.mount - /sysroot. Sep 9 05:27:12.956251 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Sep 9 05:27:12.957753 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Sep 9 05:27:12.960631 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Sep 9 05:27:12.961003 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. 
Sep 9 05:27:12.961051 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Sep 9 05:27:12.961075 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Sep 9 05:27:12.986249 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Sep 9 05:27:12.989648 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Sep 9 05:27:12.995473 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (890) Sep 9 05:27:12.995501 kernel: BTRFS info (device vda6): first mount of filesystem d4e5a7a8-c50a-463e-827d-ca249a0b8b8b Sep 9 05:27:12.995526 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Sep 9 05:27:12.997830 kernel: BTRFS info (device vda6): turning on async discard Sep 9 05:27:12.997857 kernel: BTRFS info (device vda6): enabling free space tree Sep 9 05:27:13.000008 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Sep 9 05:27:13.048055 initrd-setup-root[914]: cut: /sysroot/etc/passwd: No such file or directory Sep 9 05:27:13.054197 initrd-setup-root[921]: cut: /sysroot/etc/group: No such file or directory Sep 9 05:27:13.059675 initrd-setup-root[928]: cut: /sysroot/etc/shadow: No such file or directory Sep 9 05:27:13.063862 initrd-setup-root[935]: cut: /sysroot/etc/gshadow: No such file or directory Sep 9 05:27:13.167609 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Sep 9 05:27:13.170671 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Sep 9 05:27:13.171701 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Sep 9 05:27:13.202841 kernel: BTRFS info (device vda6): last unmount of filesystem d4e5a7a8-c50a-463e-827d-ca249a0b8b8b Sep 9 05:27:13.209270 systemd[1]: sysroot-oem.mount: Deactivated successfully. Sep 9 05:27:13.216023 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. 
Sep 9 05:27:13.246045 ignition[1004]: INFO : Ignition 2.22.0 Sep 9 05:27:13.246045 ignition[1004]: INFO : Stage: mount Sep 9 05:27:13.248115 ignition[1004]: INFO : no configs at "/usr/lib/ignition/base.d" Sep 9 05:27:13.248115 ignition[1004]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Sep 9 05:27:13.251142 ignition[1004]: INFO : mount: mount passed Sep 9 05:27:13.252013 ignition[1004]: INFO : Ignition finished successfully Sep 9 05:27:13.255245 systemd[1]: Finished ignition-mount.service - Ignition (mount). Sep 9 05:27:13.256690 systemd[1]: Starting ignition-files.service - Ignition (files)... Sep 9 05:27:13.283464 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Sep 9 05:27:13.319307 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (1017) Sep 9 05:27:13.319359 kernel: BTRFS info (device vda6): first mount of filesystem d4e5a7a8-c50a-463e-827d-ca249a0b8b8b Sep 9 05:27:13.319371 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Sep 9 05:27:13.323379 kernel: BTRFS info (device vda6): turning on async discard Sep 9 05:27:13.323455 kernel: BTRFS info (device vda6): enabling free space tree Sep 9 05:27:13.325864 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Sep 9 05:27:13.380816 ignition[1034]: INFO : Ignition 2.22.0 Sep 9 05:27:13.380816 ignition[1034]: INFO : Stage: files Sep 9 05:27:13.382833 ignition[1034]: INFO : no configs at "/usr/lib/ignition/base.d" Sep 9 05:27:13.382833 ignition[1034]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Sep 9 05:27:13.382833 ignition[1034]: DEBUG : files: compiled without relabeling support, skipping Sep 9 05:27:13.386470 ignition[1034]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Sep 9 05:27:13.386470 ignition[1034]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Sep 9 05:27:13.390100 ignition[1034]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Sep 9 05:27:13.391574 ignition[1034]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Sep 9 05:27:13.393396 unknown[1034]: wrote ssh authorized keys file for user: core Sep 9 05:27:13.394635 ignition[1034]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Sep 9 05:27:13.397261 ignition[1034]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Sep 9 05:27:13.399353 ignition[1034]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1 Sep 9 05:27:13.444092 ignition[1034]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Sep 9 05:27:14.035934 ignition[1034]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Sep 9 05:27:14.035934 ignition[1034]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz" Sep 9 05:27:14.040581 ignition[1034]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET 
https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1 Sep 9 05:27:14.121879 ignition[1034]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Sep 9 05:27:14.529901 systemd-networkd[853]: eth0: Gained IPv6LL Sep 9 05:27:14.565778 ignition[1034]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz" Sep 9 05:27:14.565778 ignition[1034]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh" Sep 9 05:27:14.571665 ignition[1034]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh" Sep 9 05:27:14.571665 ignition[1034]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml" Sep 9 05:27:14.571665 ignition[1034]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml" Sep 9 05:27:14.571665 ignition[1034]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Sep 9 05:27:14.571665 ignition[1034]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Sep 9 05:27:14.571665 ignition[1034]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Sep 9 05:27:14.571665 ignition[1034]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Sep 9 05:27:14.588295 ignition[1034]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf" Sep 9 05:27:14.588295 ignition[1034]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf" Sep 9 05:27:14.588295 ignition[1034]: 
INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw" Sep 9 05:27:14.588295 ignition[1034]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw" Sep 9 05:27:14.588295 ignition[1034]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw" Sep 9 05:27:14.588295 ignition[1034]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://extensions.flatcar.org/extensions/kubernetes-v1.31.8-x86-64.raw: attempt #1 Sep 9 05:27:14.992940 ignition[1034]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK Sep 9 05:27:15.424915 ignition[1034]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw" Sep 9 05:27:15.424915 ignition[1034]: INFO : files: op(c): [started] processing unit "prepare-helm.service" Sep 9 05:27:15.428757 ignition[1034]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Sep 9 05:27:15.506872 ignition[1034]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Sep 9 05:27:15.506872 ignition[1034]: INFO : files: op(c): [finished] processing unit "prepare-helm.service" Sep 9 05:27:15.506872 ignition[1034]: INFO : files: op(e): [started] processing unit "coreos-metadata.service" Sep 9 05:27:15.506872 ignition[1034]: INFO : files: op(e): op(f): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Sep 9 05:27:15.514759 ignition[1034]: INFO : files: op(e): op(f): [finished] writing unit 
"coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Sep 9 05:27:15.514759 ignition[1034]: INFO : files: op(e): [finished] processing unit "coreos-metadata.service" Sep 9 05:27:15.514759 ignition[1034]: INFO : files: op(10): [started] setting preset to disabled for "coreos-metadata.service" Sep 9 05:27:15.531381 ignition[1034]: INFO : files: op(10): op(11): [started] removing enablement symlink(s) for "coreos-metadata.service" Sep 9 05:27:15.536992 ignition[1034]: INFO : files: op(10): op(11): [finished] removing enablement symlink(s) for "coreos-metadata.service" Sep 9 05:27:15.539051 ignition[1034]: INFO : files: op(10): [finished] setting preset to disabled for "coreos-metadata.service" Sep 9 05:27:15.539051 ignition[1034]: INFO : files: op(12): [started] setting preset to enabled for "prepare-helm.service" Sep 9 05:27:15.539051 ignition[1034]: INFO : files: op(12): [finished] setting preset to enabled for "prepare-helm.service" Sep 9 05:27:15.539051 ignition[1034]: INFO : files: createResultFile: createFiles: op(13): [started] writing file "/sysroot/etc/.ignition-result.json" Sep 9 05:27:15.539051 ignition[1034]: INFO : files: createResultFile: createFiles: op(13): [finished] writing file "/sysroot/etc/.ignition-result.json" Sep 9 05:27:15.539051 ignition[1034]: INFO : files: files passed Sep 9 05:27:15.539051 ignition[1034]: INFO : Ignition finished successfully Sep 9 05:27:15.541800 systemd[1]: Finished ignition-files.service - Ignition (files). Sep 9 05:27:15.547296 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Sep 9 05:27:15.549844 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Sep 9 05:27:15.564049 systemd[1]: ignition-quench.service: Deactivated successfully. Sep 9 05:27:15.564229 systemd[1]: Finished ignition-quench.service - Ignition (record completion). 
Sep 9 05:27:15.568577 initrd-setup-root-after-ignition[1063]: grep: /sysroot/oem/oem-release: No such file or directory Sep 9 05:27:15.571474 initrd-setup-root-after-ignition[1065]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Sep 9 05:27:15.573266 initrd-setup-root-after-ignition[1065]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Sep 9 05:27:15.574915 initrd-setup-root-after-ignition[1069]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Sep 9 05:27:15.574701 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Sep 9 05:27:15.576890 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Sep 9 05:27:15.581033 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Sep 9 05:27:15.658218 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Sep 9 05:27:15.658383 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Sep 9 05:27:15.659577 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Sep 9 05:27:15.661533 systemd[1]: Reached target initrd.target - Initrd Default Target. Sep 9 05:27:15.663545 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Sep 9 05:27:15.665366 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Sep 9 05:27:15.695415 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Sep 9 05:27:15.699167 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Sep 9 05:27:15.727931 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Sep 9 05:27:15.746384 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Sep 9 05:27:15.748478 systemd[1]: Stopped target timers.target - Timer Units. 
Sep 9 05:27:15.750428 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Sep 9 05:27:15.750582 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Sep 9 05:27:15.752895 systemd[1]: Stopped target initrd.target - Initrd Default Target. Sep 9 05:27:15.754363 systemd[1]: Stopped target basic.target - Basic System. Sep 9 05:27:15.756289 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Sep 9 05:27:15.758235 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Sep 9 05:27:15.760144 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Sep 9 05:27:15.762317 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System. Sep 9 05:27:15.764450 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Sep 9 05:27:15.766449 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Sep 9 05:27:15.768616 systemd[1]: Stopped target sysinit.target - System Initialization. Sep 9 05:27:15.770504 systemd[1]: Stopped target local-fs.target - Local File Systems. Sep 9 05:27:15.772588 systemd[1]: Stopped target swap.target - Swaps. Sep 9 05:27:15.774265 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Sep 9 05:27:15.774388 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Sep 9 05:27:15.776565 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Sep 9 05:27:15.777977 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Sep 9 05:27:15.779925 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Sep 9 05:27:15.780093 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Sep 9 05:27:15.782033 systemd[1]: dracut-initqueue.service: Deactivated successfully. Sep 9 05:27:15.782144 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. 
Sep 9 05:27:15.784393 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Sep 9 05:27:15.784518 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Sep 9 05:27:15.786268 systemd[1]: Stopped target paths.target - Path Units. Sep 9 05:27:15.787968 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Sep 9 05:27:15.796886 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Sep 9 05:27:15.798180 systemd[1]: Stopped target slices.target - Slice Units. Sep 9 05:27:15.800465 systemd[1]: Stopped target sockets.target - Socket Units. Sep 9 05:27:15.801352 systemd[1]: iscsid.socket: Deactivated successfully. Sep 9 05:27:15.801479 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Sep 9 05:27:15.803058 systemd[1]: iscsiuio.socket: Deactivated successfully. Sep 9 05:27:15.803171 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Sep 9 05:27:15.804739 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Sep 9 05:27:15.804901 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Sep 9 05:27:15.806452 systemd[1]: ignition-files.service: Deactivated successfully. Sep 9 05:27:15.806580 systemd[1]: Stopped ignition-files.service - Ignition (files). Sep 9 05:27:15.810106 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Sep 9 05:27:15.813001 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Sep 9 05:27:15.813153 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Sep 9 05:27:15.815028 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Sep 9 05:27:15.817621 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Sep 9 05:27:15.818683 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. 
Sep 9 05:27:15.823461 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Sep 9 05:27:15.823609 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Sep 9 05:27:15.831357 systemd[1]: initrd-cleanup.service: Deactivated successfully. Sep 9 05:27:15.831982 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Sep 9 05:27:15.855827 systemd[1]: sysroot-boot.mount: Deactivated successfully. Sep 9 05:27:15.863148 ignition[1089]: INFO : Ignition 2.22.0 Sep 9 05:27:15.863148 ignition[1089]: INFO : Stage: umount Sep 9 05:27:15.865178 ignition[1089]: INFO : no configs at "/usr/lib/ignition/base.d" Sep 9 05:27:15.865178 ignition[1089]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Sep 9 05:27:15.865178 ignition[1089]: INFO : umount: umount passed Sep 9 05:27:15.865178 ignition[1089]: INFO : Ignition finished successfully Sep 9 05:27:15.871343 systemd[1]: ignition-mount.service: Deactivated successfully. Sep 9 05:27:15.871505 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Sep 9 05:27:15.873648 systemd[1]: Stopped target network.target - Network. Sep 9 05:27:15.874518 systemd[1]: ignition-disks.service: Deactivated successfully. Sep 9 05:27:15.874589 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Sep 9 05:27:15.876310 systemd[1]: ignition-kargs.service: Deactivated successfully. Sep 9 05:27:15.876368 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Sep 9 05:27:15.878309 systemd[1]: ignition-setup.service: Deactivated successfully. Sep 9 05:27:15.878384 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Sep 9 05:27:15.878665 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Sep 9 05:27:15.878721 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Sep 9 05:27:15.879316 systemd[1]: Stopping systemd-networkd.service - Network Configuration... 
Sep 9 05:27:15.883615 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Sep 9 05:27:15.894420 systemd[1]: systemd-resolved.service: Deactivated successfully. Sep 9 05:27:15.895958 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Sep 9 05:27:15.899533 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully. Sep 9 05:27:15.900069 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Sep 9 05:27:15.900146 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Sep 9 05:27:15.933185 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully. Sep 9 05:27:15.933481 systemd[1]: systemd-networkd.service: Deactivated successfully. Sep 9 05:27:15.933604 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Sep 9 05:27:15.937160 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully. Sep 9 05:27:15.937590 systemd[1]: Stopped target network-pre.target - Preparation for Network. Sep 9 05:27:15.940467 systemd[1]: systemd-networkd.socket: Deactivated successfully. Sep 9 05:27:15.940532 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Sep 9 05:27:15.943487 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Sep 9 05:27:15.943553 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Sep 9 05:27:15.943602 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Sep 9 05:27:15.944082 systemd[1]: systemd-sysctl.service: Deactivated successfully. Sep 9 05:27:15.944125 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Sep 9 05:27:15.949169 systemd[1]: systemd-modules-load.service: Deactivated successfully. Sep 9 05:27:15.949216 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. 
Sep 9 05:27:15.950132 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Sep 9 05:27:15.951261 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Sep 9 05:27:15.971055 systemd[1]: systemd-udevd.service: Deactivated successfully. Sep 9 05:27:15.973039 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Sep 9 05:27:15.976125 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Sep 9 05:27:15.976188 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Sep 9 05:27:15.978264 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Sep 9 05:27:15.978316 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Sep 9 05:27:15.980341 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Sep 9 05:27:15.980459 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Sep 9 05:27:15.983162 systemd[1]: dracut-cmdline.service: Deactivated successfully. Sep 9 05:27:15.983282 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Sep 9 05:27:15.984712 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Sep 9 05:27:15.984783 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Sep 9 05:27:15.990077 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Sep 9 05:27:15.991424 systemd[1]: systemd-network-generator.service: Deactivated successfully. Sep 9 05:27:15.991502 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line. Sep 9 05:27:15.995463 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Sep 9 05:27:15.995522 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Sep 9 05:27:15.999928 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. 
Sep 9 05:27:15.999982 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Sep 9 05:27:16.003534 systemd[1]: network-cleanup.service: Deactivated successfully. Sep 9 05:27:16.011954 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Sep 9 05:27:16.019333 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Sep 9 05:27:16.019461 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Sep 9 05:27:16.700467 systemd[1]: sysroot-boot.service: Deactivated successfully. Sep 9 05:27:16.700648 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Sep 9 05:27:16.706224 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Sep 9 05:27:16.706371 systemd[1]: initrd-setup-root.service: Deactivated successfully. Sep 9 05:27:16.706489 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Sep 9 05:27:16.709932 systemd[1]: Starting initrd-switch-root.service - Switch Root... Sep 9 05:27:16.764268 systemd[1]: Switching root. Sep 9 05:27:16.802161 systemd-journald[220]: Journal stopped Sep 9 05:27:18.646962 systemd-journald[220]: Received SIGTERM from PID 1 (systemd). 
Sep 9 05:27:18.647035 kernel: SELinux: policy capability network_peer_controls=1 Sep 9 05:27:18.647050 kernel: SELinux: policy capability open_perms=1 Sep 9 05:27:18.647067 kernel: SELinux: policy capability extended_socket_class=1 Sep 9 05:27:18.647078 kernel: SELinux: policy capability always_check_network=0 Sep 9 05:27:18.647089 kernel: SELinux: policy capability cgroup_seclabel=1 Sep 9 05:27:18.647101 kernel: SELinux: policy capability nnp_nosuid_transition=1 Sep 9 05:27:18.647112 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Sep 9 05:27:18.647127 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Sep 9 05:27:18.647138 kernel: SELinux: policy capability userspace_initial_context=0 Sep 9 05:27:18.647149 kernel: audit: type=1403 audit(1757395637.684:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Sep 9 05:27:18.647167 systemd[1]: Successfully loaded SELinux policy in 95.516ms. Sep 9 05:27:18.647182 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 11.259ms. Sep 9 05:27:18.647195 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Sep 9 05:27:18.647217 systemd[1]: Detected virtualization kvm. Sep 9 05:27:18.647229 systemd[1]: Detected architecture x86-64. Sep 9 05:27:18.647243 systemd[1]: Detected first boot. Sep 9 05:27:18.647255 systemd[1]: Initializing machine ID from VM UUID. Sep 9 05:27:18.647268 zram_generator::config[1136]: No configuration found. 
Sep 9 05:27:18.647286 kernel: Guest personality initialized and is inactive Sep 9 05:27:18.647298 kernel: VMCI host device registered (name=vmci, major=10, minor=125) Sep 9 05:27:18.647309 kernel: Initialized host personality Sep 9 05:27:18.647320 kernel: NET: Registered PF_VSOCK protocol family Sep 9 05:27:18.647331 systemd[1]: Populated /etc with preset unit settings. Sep 9 05:27:18.647344 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully. Sep 9 05:27:18.647371 systemd[1]: initrd-switch-root.service: Deactivated successfully. Sep 9 05:27:18.647384 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Sep 9 05:27:18.647396 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Sep 9 05:27:18.647409 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Sep 9 05:27:18.647421 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Sep 9 05:27:18.647435 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Sep 9 05:27:18.647448 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Sep 9 05:27:18.647460 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Sep 9 05:27:18.647474 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Sep 9 05:27:18.647488 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Sep 9 05:27:18.647500 systemd[1]: Created slice user.slice - User and Session Slice. Sep 9 05:27:18.647516 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Sep 9 05:27:18.647529 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Sep 9 05:27:18.647543 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. 
Sep 9 05:27:18.647557 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Sep 9 05:27:18.647571 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Sep 9 05:27:18.647586 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Sep 9 05:27:18.647598 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Sep 9 05:27:18.647610 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Sep 9 05:27:18.647623 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Sep 9 05:27:18.647635 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Sep 9 05:27:18.647647 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Sep 9 05:27:18.647659 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Sep 9 05:27:18.647671 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Sep 9 05:27:18.647683 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Sep 9 05:27:18.647700 systemd[1]: Reached target remote-fs.target - Remote File Systems. Sep 9 05:27:18.647712 systemd[1]: Reached target slices.target - Slice Units. Sep 9 05:27:18.647724 systemd[1]: Reached target swap.target - Swaps. Sep 9 05:27:18.647736 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Sep 9 05:27:18.647752 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Sep 9 05:27:18.647765 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption. Sep 9 05:27:18.647777 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Sep 9 05:27:18.647789 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Sep 9 05:27:18.647801 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. 
Sep 9 05:27:18.647841 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Sep 9 05:27:18.647854 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Sep 9 05:27:18.647866 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Sep 9 05:27:18.647878 systemd[1]: Mounting media.mount - External Media Directory... Sep 9 05:27:18.647890 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 9 05:27:18.647902 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Sep 9 05:27:18.647913 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Sep 9 05:27:18.647925 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Sep 9 05:27:18.647938 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Sep 9 05:27:18.647955 systemd[1]: Reached target machines.target - Containers. Sep 9 05:27:18.647967 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Sep 9 05:27:18.647979 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Sep 9 05:27:18.647991 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Sep 9 05:27:18.648003 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Sep 9 05:27:18.648015 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Sep 9 05:27:18.648028 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Sep 9 05:27:18.648040 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Sep 9 05:27:18.648057 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... 
Sep 9 05:27:18.648069 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Sep 9 05:27:18.648082 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Sep 9 05:27:18.648094 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Sep 9 05:27:18.648107 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Sep 9 05:27:18.648119 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Sep 9 05:27:18.648277 systemd[1]: Stopped systemd-fsck-usr.service. Sep 9 05:27:18.648290 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Sep 9 05:27:18.648307 systemd[1]: Starting systemd-journald.service - Journal Service... Sep 9 05:27:18.648319 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Sep 9 05:27:18.648331 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Sep 9 05:27:18.648343 kernel: loop: module loaded Sep 9 05:27:18.648355 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Sep 9 05:27:18.648375 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials... Sep 9 05:27:18.648387 kernel: fuse: init (API version 7.41) Sep 9 05:27:18.648402 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Sep 9 05:27:18.648433 systemd[1]: verity-setup.service: Deactivated successfully. Sep 9 05:27:18.648455 systemd[1]: Stopped verity-setup.service. Sep 9 05:27:18.648472 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). 
Sep 9 05:27:18.648486 kernel: ACPI: bus type drm_connector registered Sep 9 05:27:18.648497 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Sep 9 05:27:18.648513 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Sep 9 05:27:18.648531 systemd[1]: Mounted media.mount - External Media Directory. Sep 9 05:27:18.648548 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Sep 9 05:27:18.648560 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Sep 9 05:27:18.648572 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Sep 9 05:27:18.648584 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Sep 9 05:27:18.648605 systemd[1]: modprobe@configfs.service: Deactivated successfully. Sep 9 05:27:18.648623 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Sep 9 05:27:18.648635 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 9 05:27:18.648647 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Sep 9 05:27:18.648683 systemd-journald[1204]: Collecting audit messages is disabled. Sep 9 05:27:18.648706 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Sep 9 05:27:18.648718 systemd[1]: modprobe@drm.service: Deactivated successfully. Sep 9 05:27:18.648730 systemd-journald[1204]: Journal started Sep 9 05:27:18.648757 systemd-journald[1204]: Runtime Journal (/run/log/journal/325fe3cbe53349cda5eb2ec7c3ca0d23) is 6M, max 48.2M, 42.2M free. Sep 9 05:27:18.386975 systemd[1]: Queued start job for default target multi-user.target. Sep 9 05:27:18.407379 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Sep 9 05:27:18.408023 systemd[1]: systemd-journald.service: Deactivated successfully. Sep 9 05:27:18.650414 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Sep 9 05:27:18.652868 systemd[1]: Started systemd-journald.service - Journal Service. 
Sep 9 05:27:18.654305 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 9 05:27:18.654564 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Sep 9 05:27:18.656093 systemd[1]: modprobe@fuse.service: Deactivated successfully. Sep 9 05:27:18.656311 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Sep 9 05:27:18.657650 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 9 05:27:18.657912 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Sep 9 05:27:18.659342 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Sep 9 05:27:18.661155 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Sep 9 05:27:18.662725 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Sep 9 05:27:18.664327 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials. Sep 9 05:27:18.679998 systemd[1]: Reached target network-pre.target - Preparation for Network. Sep 9 05:27:18.682709 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Sep 9 05:27:18.684815 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Sep 9 05:27:18.685932 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Sep 9 05:27:18.685959 systemd[1]: Reached target local-fs.target - Local File Systems. Sep 9 05:27:18.687956 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management. Sep 9 05:27:18.698046 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Sep 9 05:27:18.699339 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Sep 9 05:27:18.701196 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... 
Sep 9 05:27:18.704699 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Sep 9 05:27:18.706967 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Sep 9 05:27:18.708406 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Sep 9 05:27:18.709748 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Sep 9 05:27:18.712509 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Sep 9 05:27:18.715068 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Sep 9 05:27:18.719678 systemd-journald[1204]: Time spent on flushing to /var/log/journal/325fe3cbe53349cda5eb2ec7c3ca0d23 is 31.390ms for 1042 entries.
Sep 9 05:27:18.719678 systemd-journald[1204]: System Journal (/var/log/journal/325fe3cbe53349cda5eb2ec7c3ca0d23) is 8M, max 195.6M, 187.6M free.
Sep 9 05:27:18.755565 systemd-journald[1204]: Received client request to flush runtime journal.
Sep 9 05:27:18.755614 kernel: loop0: detected capacity change from 0 to 128016
Sep 9 05:27:18.726798 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Sep 9 05:27:18.730430 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Sep 9 05:27:18.732065 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Sep 9 05:27:18.748868 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Sep 9 05:27:18.752052 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Sep 9 05:27:18.755394 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk...
Sep 9 05:27:18.757206 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Sep 9 05:27:18.759140 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Sep 9 05:27:18.761125 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Sep 9 05:27:18.784834 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Sep 9 05:27:18.784894 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Sep 9 05:27:18.791983 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Sep 9 05:27:18.807834 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk.
Sep 9 05:27:18.814895 kernel: loop1: detected capacity change from 0 to 110984
Sep 9 05:27:18.836493 systemd-tmpfiles[1272]: ACLs are not supported, ignoring.
Sep 9 05:27:18.836514 systemd-tmpfiles[1272]: ACLs are not supported, ignoring.
Sep 9 05:27:18.847744 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Sep 9 05:27:18.857832 kernel: loop2: detected capacity change from 0 to 221472
Sep 9 05:27:18.893839 kernel: loop3: detected capacity change from 0 to 128016
Sep 9 05:27:18.907074 kernel: loop4: detected capacity change from 0 to 110984
Sep 9 05:27:18.915835 kernel: loop5: detected capacity change from 0 to 221472
Sep 9 05:27:18.925409 (sd-merge)[1279]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'.
Sep 9 05:27:18.926242 (sd-merge)[1279]: Merged extensions into '/usr'.
Sep 9 05:27:18.942787 systemd[1]: Reload requested from client PID 1255 ('systemd-sysext') (unit systemd-sysext.service)...
Sep 9 05:27:18.942822 systemd[1]: Reloading...
Sep 9 05:27:19.047872 zram_generator::config[1308]: No configuration found.
Sep 9 05:27:19.231109 ldconfig[1250]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Sep 9 05:27:19.312996 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Sep 9 05:27:19.313265 systemd[1]: Reloading finished in 369 ms.
Sep 9 05:27:19.446079 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Sep 9 05:27:19.448297 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Sep 9 05:27:19.466977 systemd[1]: Starting ensure-sysext.service...
Sep 9 05:27:19.469642 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Sep 9 05:27:19.483939 systemd[1]: Reload requested from client PID 1342 ('systemctl') (unit ensure-sysext.service)...
Sep 9 05:27:19.483964 systemd[1]: Reloading...
Sep 9 05:27:19.558929 systemd-tmpfiles[1343]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring.
Sep 9 05:27:19.558984 systemd-tmpfiles[1343]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring.
Sep 9 05:27:19.559378 systemd-tmpfiles[1343]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Sep 9 05:27:19.559666 systemd-tmpfiles[1343]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Sep 9 05:27:19.560785 systemd-tmpfiles[1343]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Sep 9 05:27:19.561164 systemd-tmpfiles[1343]: ACLs are not supported, ignoring.
Sep 9 05:27:19.561300 systemd-tmpfiles[1343]: ACLs are not supported, ignoring.
Sep 9 05:27:19.568460 systemd-tmpfiles[1343]: Detected autofs mount point /boot during canonicalization of boot.
Sep 9 05:27:19.569149 systemd-tmpfiles[1343]: Skipping /boot
Sep 9 05:27:19.586385 systemd-tmpfiles[1343]: Detected autofs mount point /boot during canonicalization of boot.
Sep 9 05:27:19.586551 systemd-tmpfiles[1343]: Skipping /boot
Sep 9 05:27:19.602840 zram_generator::config[1373]: No configuration found.
Sep 9 05:27:19.880844 systemd[1]: Reloading finished in 396 ms.
Sep 9 05:27:19.898495 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Sep 9 05:27:19.930378 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Sep 9 05:27:19.941073 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Sep 9 05:27:19.943891 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Sep 9 05:27:19.946677 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Sep 9 05:27:19.960875 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Sep 9 05:27:19.965649 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Sep 9 05:27:19.973142 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Sep 9 05:27:19.978390 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 9 05:27:19.978633 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Sep 9 05:27:19.980659 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Sep 9 05:27:19.985049 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Sep 9 05:27:19.990925 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Sep 9 05:27:19.994119 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Sep 9 05:27:19.994241 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Sep 9 05:27:19.999040 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Sep 9 05:27:20.000091 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 9 05:27:20.001962 systemd[1]: modprobe@loop.service: Deactivated successfully.
Sep 9 05:27:20.002181 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Sep 9 05:27:20.008950 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Sep 9 05:27:20.009182 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Sep 9 05:27:20.017066 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Sep 9 05:27:20.017365 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Sep 9 05:27:20.019253 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Sep 9 05:27:20.025023 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 9 05:27:20.025301 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Sep 9 05:27:20.027216 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Sep 9 05:27:20.030306 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Sep 9 05:27:20.032785 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Sep 9 05:27:20.035114 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Sep 9 05:27:20.035167 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Sep 9 05:27:20.042040 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Sep 9 05:27:20.043523 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 9 05:27:20.044277 systemd[1]: Finished ensure-sysext.service.
Sep 9 05:27:20.050152 systemd[1]: modprobe@drm.service: Deactivated successfully.
Sep 9 05:27:20.050431 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Sep 9 05:27:20.052177 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Sep 9 05:27:20.052447 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Sep 9 05:27:20.054445 systemd[1]: modprobe@loop.service: Deactivated successfully.
Sep 9 05:27:20.054756 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Sep 9 05:27:20.063132 augenrules[1448]: No rules
Sep 9 05:27:20.063506 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Sep 9 05:27:20.065431 systemd[1]: audit-rules.service: Deactivated successfully.
Sep 9 05:27:20.065776 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Sep 9 05:27:20.067876 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Sep 9 05:27:20.073385 systemd-udevd[1413]: Using default interface naming scheme 'v255'.
Sep 9 05:27:20.073568 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Sep 9 05:27:20.073673 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Sep 9 05:27:20.076363 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Sep 9 05:27:20.079392 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Sep 9 05:27:20.104187 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Sep 9 05:27:20.106543 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Sep 9 05:27:20.115054 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Sep 9 05:27:20.122988 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Sep 9 05:27:20.202678 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
Sep 9 05:27:20.321948 systemd-resolved[1412]: Positive Trust Anchors:
Sep 9 05:27:20.321974 systemd-resolved[1412]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Sep 9 05:27:20.322017 systemd-resolved[1412]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Sep 9 05:27:20.326523 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Sep 9 05:27:20.329547 systemd-resolved[1412]: Defaulting to hostname 'linux'.
Sep 9 05:27:20.329638 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Sep 9 05:27:20.331650 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Sep 9 05:27:20.333004 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Sep 9 05:27:20.336832 kernel: mousedev: PS/2 mouse device common for all mice
Sep 9 05:27:20.346676 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
Sep 9 05:27:20.348444 systemd[1]: Reached target sysinit.target - System Initialization.
Sep 9 05:27:20.350078 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Sep 9 05:27:20.351609 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Sep 9 05:27:20.353108 systemd[1]: Started google-oslogin-cache.timer - NSS cache refresh timer.
Sep 9 05:27:20.356044 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Sep 9 05:27:20.357541 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Sep 9 05:27:20.357574 systemd[1]: Reached target paths.target - Path Units.
Sep 9 05:27:20.358690 systemd[1]: Reached target time-set.target - System Time Set.
Sep 9 05:27:20.360166 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Sep 9 05:27:20.361625 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Sep 9 05:27:20.363104 systemd[1]: Reached target timers.target - Timer Units.
Sep 9 05:27:20.364943 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Sep 9 05:27:20.368075 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3
Sep 9 05:27:20.366717 systemd[1]: Starting docker.socket - Docker Socket for the API...
Sep 9 05:27:20.369447 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local).
Sep 9 05:27:20.369680 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK).
Sep 9 05:27:20.370152 systemd[1]: Reached target ssh-access.target - SSH Access Available.
Sep 9 05:27:20.378876 kernel: ACPI: button: Power Button [PWRF]
Sep 9 05:27:20.379690 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Sep 9 05:27:20.381121 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket.
Sep 9 05:27:20.384188 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Sep 9 05:27:20.385971 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Sep 9 05:27:20.390603 systemd[1]: Reached target sockets.target - Socket Units.
Sep 9 05:27:20.392134 systemd[1]: Reached target basic.target - Basic System.
Sep 9 05:27:20.393437 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Sep 9 05:27:20.393485 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Sep 9 05:27:20.398020 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Sep 9 05:27:20.401073 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Sep 9 05:27:20.403753 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Sep 9 05:27:20.414028 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Sep 9 05:27:20.415973 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Sep 9 05:27:20.422065 systemd[1]: Starting google-oslogin-cache.service - NSS cache refresh...
Sep 9 05:27:20.425821 kernel: i801_smbus 0000:00:1f.3: Enabling SMBus device
Sep 9 05:27:20.428822 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt
Sep 9 05:27:20.429078 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD
Sep 9 05:27:20.429882 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Sep 9 05:27:20.436765 oslogin_cache_refresh[1519]: Refreshing passwd entry cache
Sep 9 05:27:20.440297 google_oslogin_nss_cache[1519]: oslogin_cache_refresh[1519]: Refreshing passwd entry cache
Sep 9 05:27:20.440297 google_oslogin_nss_cache[1519]: oslogin_cache_refresh[1519]: Failure getting users, quitting
Sep 9 05:27:20.440297 google_oslogin_nss_cache[1519]: oslogin_cache_refresh[1519]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak.
Sep 9 05:27:20.440297 google_oslogin_nss_cache[1519]: oslogin_cache_refresh[1519]: Refreshing group entry cache
Sep 9 05:27:20.440297 google_oslogin_nss_cache[1519]: oslogin_cache_refresh[1519]: Failure getting groups, quitting
Sep 9 05:27:20.440297 google_oslogin_nss_cache[1519]: oslogin_cache_refresh[1519]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak.
Sep 9 05:27:20.435438 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Sep 9 05:27:20.439013 oslogin_cache_refresh[1519]: Failure getting users, quitting
Sep 9 05:27:20.448003 jq[1513]: false
Sep 9 05:27:20.439028 oslogin_cache_refresh[1519]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak.
Sep 9 05:27:20.439073 oslogin_cache_refresh[1519]: Refreshing group entry cache
Sep 9 05:27:20.439516 oslogin_cache_refresh[1519]: Failure getting groups, quitting
Sep 9 05:27:20.439526 oslogin_cache_refresh[1519]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak.
Sep 9 05:27:20.453999 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Sep 9 05:27:20.458284 extend-filesystems[1518]: Found /dev/vda6
Sep 9 05:27:20.459483 extend-filesystems[1518]: Found /dev/vda9
Sep 9 05:27:20.466882 extend-filesystems[1518]: Checking size of /dev/vda9
Sep 9 05:27:20.460946 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Sep 9 05:27:20.462186 systemd-networkd[1472]: lo: Link UP
Sep 9 05:27:20.462190 systemd-networkd[1472]: lo: Gained carrier
Sep 9 05:27:20.466327 systemd-networkd[1472]: Enumeration completed
Sep 9 05:27:20.614953 systemd-networkd[1472]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Sep 9 05:27:20.614958 systemd-networkd[1472]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Sep 9 05:27:20.615130 systemd[1]: Starting systemd-logind.service - User Login Management...
Sep 9 05:27:20.616917 systemd-networkd[1472]: eth0: Link UP
Sep 9 05:27:20.617911 systemd-networkd[1472]: eth0: Gained carrier
Sep 9 05:27:20.617925 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Sep 9 05:27:20.617931 systemd-networkd[1472]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Sep 9 05:27:20.619972 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Sep 9 05:27:20.622471 extend-filesystems[1518]: Resized partition /dev/vda9
Sep 9 05:27:20.623934 systemd[1]: Starting update-engine.service - Update Engine...
Sep 9 05:27:20.629164 extend-filesystems[1544]: resize2fs 1.47.3 (8-Jul-2025)
Sep 9 05:27:20.630862 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Sep 9 05:27:20.633451 systemd[1]: Started systemd-networkd.service - Network Configuration.
Sep 9 05:27:20.633921 systemd-networkd[1472]: eth0: DHCPv4 address 10.0.0.16/16, gateway 10.0.0.1 acquired from 10.0.0.1
Sep 9 05:27:20.635216 systemd-timesyncd[1458]: Network configuration changed, trying to establish connection.
Sep 9 05:27:20.635869 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Sep 9 05:27:21.474121 systemd-timesyncd[1458]: Contacted time server 10.0.0.1:123 (10.0.0.1).
Sep 9 05:27:21.474166 systemd-timesyncd[1458]: Initial clock synchronization to Tue 2025-09-09 05:27:21.474034 UTC.
Sep 9 05:27:21.474934 systemd-resolved[1412]: Clock change detected. Flushing caches.
Sep 9 05:27:21.476288 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Sep 9 05:27:21.476954 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Sep 9 05:27:21.477424 systemd[1]: google-oslogin-cache.service: Deactivated successfully.
Sep 9 05:27:21.477756 systemd[1]: Finished google-oslogin-cache.service - NSS cache refresh.
Sep 9 05:27:21.492948 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks
Sep 9 05:27:21.492318 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Sep 9 05:27:21.544443 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Sep 9 05:27:21.548449 systemd[1]: motdgen.service: Deactivated successfully.
Sep 9 05:27:21.550445 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Sep 9 05:27:21.555817 jq[1546]: true
Sep 9 05:27:21.558986 kernel: EXT4-fs (vda9): resized filesystem to 1864699
Sep 9 05:27:21.580993 update_engine[1542]: I20250909 05:27:21.577853 1542 main.cc:92] Flatcar Update Engine starting
Sep 9 05:27:21.593833 extend-filesystems[1544]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required
Sep 9 05:27:21.593833 extend-filesystems[1544]: old_desc_blocks = 1, new_desc_blocks = 1
Sep 9 05:27:21.593833 extend-filesystems[1544]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long.
Sep 9 05:27:21.601229 extend-filesystems[1518]: Resized filesystem in /dev/vda9
Sep 9 05:27:21.602621 systemd[1]: extend-filesystems.service: Deactivated successfully.
Sep 9 05:27:21.606043 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Sep 9 05:27:21.617443 jq[1558]: true
Sep 9 05:27:21.630328 kernel: kvm_amd: TSC scaling supported
Sep 9 05:27:21.630438 kernel: kvm_amd: Nested Virtualization enabled
Sep 9 05:27:21.630465 kernel: kvm_amd: Nested Paging enabled
Sep 9 05:27:21.630487 kernel: kvm_amd: LBR virtualization supported
Sep 9 05:27:21.631010 kernel: kvm_amd: Virtual VMLOAD VMSAVE supported
Sep 9 05:27:21.632187 kernel: kvm_amd: Virtual GIF supported
Sep 9 05:27:21.640963 tar[1553]: linux-amd64/helm
Sep 9 05:27:21.645990 systemd[1]: Reached target network.target - Network.
Sep 9 05:27:21.650063 systemd[1]: Starting containerd.service - containerd container runtime...
Sep 9 05:27:21.654772 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd...
Sep 9 05:27:21.655057 dbus-daemon[1509]: [system] SELinux support is enabled
Sep 9 05:27:21.658494 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Sep 9 05:27:21.661870 update_engine[1542]: I20250909 05:27:21.661813 1542 update_check_scheduler.cc:74] Next update check in 10m27s
Sep 9 05:27:21.662332 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Sep 9 05:27:21.665084 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Sep 9 05:27:21.750146 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Sep 9 05:27:21.750460 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Sep 9 05:27:21.775728 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Sep 9 05:27:21.776034 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Sep 9 05:27:21.781472 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Sep 9 05:27:21.781806 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Sep 9 05:27:21.790453 systemd[1]: Started update-engine.service - Update Engine.
Sep 9 05:27:21.794263 systemd-logind[1533]: Watching system buttons on /dev/input/event2 (Power Button)
Sep 9 05:27:21.795170 systemd-logind[1533]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
Sep 9 05:27:21.795807 systemd-logind[1533]: New seat seat0.
Sep 9 05:27:21.802352 (ntainerd)[1591]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Sep 9 05:27:21.807907 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Sep 9 05:27:21.811241 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Sep 9 05:27:21.813892 systemd[1]: Started systemd-logind.service - User Login Management.
Sep 9 05:27:21.865300 sshd_keygen[1555]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Sep 9 05:27:21.895979 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Sep 9 05:27:21.932607 systemd[1]: Starting issuegen.service - Generate /run/issue...
Sep 9 05:27:21.959413 systemd[1]: issuegen.service: Deactivated successfully.
Sep 9 05:27:21.959850 systemd[1]: Finished issuegen.service - Generate /run/issue.
Sep 9 05:27:21.966042 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
Sep 9 05:27:22.061245 kernel: EDAC MC: Ver: 3.0.0
Sep 9 05:27:22.064126 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd.
Sep 9 05:27:22.073309 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
Sep 9 05:27:22.075989 systemd[1]: Started getty@tty1.service - Getty on tty1.
Sep 9 05:27:22.078157 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0.
Sep 9 05:27:22.078362 systemd[1]: Reached target getty.target - Login Prompts.
Sep 9 05:27:22.124300 locksmithd[1595]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Sep 9 05:27:22.146148 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Sep 9 05:27:22.245701 tar[1553]: linux-amd64/LICENSE
Sep 9 05:27:22.245701 tar[1553]: linux-amd64/README.md
Sep 9 05:27:22.281870 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin.
Sep 9 05:27:22.463504 containerd[1591]: time="2025-09-09T05:27:22Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8
Sep 9 05:27:22.464310 containerd[1591]: time="2025-09-09T05:27:22.464265573Z" level=info msg="starting containerd" revision=fb4c30d4ede3531652d86197bf3fc9515e5276d9 version=v2.0.5
Sep 9 05:27:22.479401 containerd[1591]: time="2025-09-09T05:27:22.479306105Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="12.623µs"
Sep 9 05:27:22.479401 containerd[1591]: time="2025-09-09T05:27:22.479367480Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1
Sep 9 05:27:22.479401 containerd[1591]: time="2025-09-09T05:27:22.479399610Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1
Sep 9 05:27:22.479706 containerd[1591]: time="2025-09-09T05:27:22.479670959Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1
Sep 9 05:27:22.479706 containerd[1591]: time="2025-09-09T05:27:22.479692680Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1
Sep 9 05:27:22.479767 containerd[1591]: time="2025-09-09T05:27:22.479740941Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1
Sep 9 05:27:22.479852 containerd[1591]: time="2025-09-09T05:27:22.479822363Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1
Sep 9 05:27:22.479852 containerd[1591]: time="2025-09-09T05:27:22.479839906Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1
Sep 9 05:27:22.480263 containerd[1591]: time="2025-09-09T05:27:22.480221462Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1
Sep 9 05:27:22.480263 containerd[1591]: time="2025-09-09T05:27:22.480248673Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1
Sep 9 05:27:22.480333 containerd[1591]: time="2025-09-09T05:27:22.480265244Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1
Sep 9 05:27:22.480333 containerd[1591]: time="2025-09-09T05:27:22.480273169Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1
Sep 9 05:27:22.480433 containerd[1591]: time="2025-09-09T05:27:22.480371103Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1
Sep 9 05:27:22.480899 containerd[1591]: time="2025-09-09T05:27:22.480839652Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1
Sep 9 05:27:22.480954 containerd[1591]: time="2025-09-09T05:27:22.480937535Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1
Sep 9 05:27:22.480954 containerd[1591]: time="2025-09-09T05:27:22.480951582Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1
Sep 9 05:27:22.481046 containerd[1591]: time="2025-09-09T05:27:22.481023807Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1
Sep 9 05:27:22.481526 containerd[1591]: time="2025-09-09T05:27:22.481443454Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1
Sep 9 05:27:22.481720 containerd[1591]: time="2025-09-09T05:27:22.481680649Z" level=info msg="metadata content store policy set" policy=shared
Sep 9 05:27:22.808166 systemd-networkd[1472]: eth0: Gained IPv6LL
Sep 9 05:27:22.812004 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Sep 9 05:27:22.814015 systemd[1]: Reached target network-online.target - Network is Online.
Sep 9 05:27:22.817351 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent...
Sep 9 05:27:22.820126 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 9 05:27:22.822510 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
Sep 9 05:27:22.869245 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
Sep 9 05:27:22.895176 bash[1587]: Updated "/home/core/.ssh/authorized_keys"
Sep 9 05:27:22.895984 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Sep 9 05:27:22.897700 systemd[1]: coreos-metadata.service: Deactivated successfully.
Sep 9 05:27:22.897971 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent.
Sep 9 05:27:22.901039 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
Sep 9 05:27:22.901602 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Sep 9 05:27:23.280161 containerd[1591]: time="2025-09-09T05:27:23.279983257Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1 Sep 9 05:27:23.280161 containerd[1591]: time="2025-09-09T05:27:23.280130152Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1 Sep 9 05:27:23.280161 containerd[1591]: time="2025-09-09T05:27:23.280162192Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1 Sep 9 05:27:23.280363 containerd[1591]: time="2025-09-09T05:27:23.280190315Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1 Sep 9 05:27:23.280363 containerd[1591]: time="2025-09-09T05:27:23.280205373Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1 Sep 9 05:27:23.280363 containerd[1591]: time="2025-09-09T05:27:23.280218408Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1 Sep 9 05:27:23.280363 containerd[1591]: time="2025-09-09T05:27:23.280238525Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1 Sep 9 05:27:23.280363 containerd[1591]: time="2025-09-09T05:27:23.280253974Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1 Sep 9 05:27:23.280363 containerd[1591]: time="2025-09-09T05:27:23.280274012Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1 Sep 9 05:27:23.280363 containerd[1591]: time="2025-09-09T05:27:23.280286435Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1 Sep 9 05:27:23.280363 containerd[1591]: time="2025-09-09T05:27:23.280298798Z" 
level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1 Sep 9 05:27:23.280363 containerd[1591]: time="2025-09-09T05:27:23.280341599Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2 Sep 9 05:27:23.280656 containerd[1591]: time="2025-09-09T05:27:23.280626273Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1 Sep 9 05:27:23.280688 containerd[1591]: time="2025-09-09T05:27:23.280672199Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1 Sep 9 05:27:23.280715 containerd[1591]: time="2025-09-09T05:27:23.280697707Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1 Sep 9 05:27:23.280739 containerd[1591]: time="2025-09-09T05:27:23.280714308Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1 Sep 9 05:27:23.280739 containerd[1591]: time="2025-09-09T05:27:23.280728024Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 Sep 9 05:27:23.280785 containerd[1591]: time="2025-09-09T05:27:23.280743803Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1 Sep 9 05:27:23.280785 containerd[1591]: time="2025-09-09T05:27:23.280757599Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1 Sep 9 05:27:23.280785 containerd[1591]: time="2025-09-09T05:27:23.280772948Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1 Sep 9 05:27:23.280868 containerd[1591]: time="2025-09-09T05:27:23.280786313Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 Sep 9 05:27:23.280868 containerd[1591]: time="2025-09-09T05:27:23.280805028Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 
Sep 9 05:27:23.280868 containerd[1591]: time="2025-09-09T05:27:23.280819024Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 Sep 9 05:27:23.281036 containerd[1591]: time="2025-09-09T05:27:23.281005865Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" Sep 9 05:27:23.281084 containerd[1591]: time="2025-09-09T05:27:23.281043395Z" level=info msg="Start snapshots syncer" Sep 9 05:27:23.281112 containerd[1591]: time="2025-09-09T05:27:23.281085133Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 Sep 9 05:27:23.281537 containerd[1591]: time="2025-09-09T05:27:23.281484843Z" level=info msg="starting cri plugin" config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolu
mes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" Sep 9 05:27:23.281810 containerd[1591]: time="2025-09-09T05:27:23.281560886Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 Sep 9 05:27:23.283639 containerd[1591]: time="2025-09-09T05:27:23.283602585Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 Sep 9 05:27:23.283770 containerd[1591]: time="2025-09-09T05:27:23.283742317Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 Sep 9 05:27:23.283811 containerd[1591]: time="2025-09-09T05:27:23.283773486Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 Sep 9 05:27:23.283838 containerd[1591]: time="2025-09-09T05:27:23.283811157Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 Sep 9 05:27:23.283838 containerd[1591]: time="2025-09-09T05:27:23.283826816Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 Sep 9 05:27:23.283885 containerd[1591]: time="2025-09-09T05:27:23.283854728Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 Sep 9 05:27:23.283885 containerd[1591]: time="2025-09-09T05:27:23.283870568Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 Sep 9 05:27:23.283962 containerd[1591]: 
time="2025-09-09T05:27:23.283886738Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 Sep 9 05:27:23.283962 containerd[1591]: time="2025-09-09T05:27:23.283935911Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 Sep 9 05:27:23.283962 containerd[1591]: time="2025-09-09T05:27:23.283952502Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 Sep 9 05:27:23.284033 containerd[1591]: time="2025-09-09T05:27:23.283965105Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 Sep 9 05:27:23.284033 containerd[1591]: time="2025-09-09T05:27:23.284014779Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Sep 9 05:27:23.284117 containerd[1591]: time="2025-09-09T05:27:23.284039375Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Sep 9 05:27:23.284117 containerd[1591]: time="2025-09-09T05:27:23.284052309Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Sep 9 05:27:23.284117 containerd[1591]: time="2025-09-09T05:27:23.284069411Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Sep 9 05:27:23.284117 containerd[1591]: time="2025-09-09T05:27:23.284087735Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 Sep 9 05:27:23.284117 containerd[1591]: time="2025-09-09T05:27:23.284100690Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 Sep 9 05:27:23.284248 containerd[1591]: time="2025-09-09T05:27:23.284120136Z" level=info msg="loading plugin" 
id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 Sep 9 05:27:23.284248 containerd[1591]: time="2025-09-09T05:27:23.284154180Z" level=info msg="runtime interface created" Sep 9 05:27:23.284248 containerd[1591]: time="2025-09-09T05:27:23.284162355Z" level=info msg="created NRI interface" Sep 9 05:27:23.284248 containerd[1591]: time="2025-09-09T05:27:23.284192482Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 Sep 9 05:27:23.284248 containerd[1591]: time="2025-09-09T05:27:23.284223961Z" level=info msg="Connect containerd service" Sep 9 05:27:23.284380 containerd[1591]: time="2025-09-09T05:27:23.284260880Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Sep 9 05:27:23.285514 containerd[1591]: time="2025-09-09T05:27:23.285470329Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Sep 9 05:27:23.436182 containerd[1591]: time="2025-09-09T05:27:23.436087273Z" level=info msg="Start subscribing containerd event" Sep 9 05:27:23.436951 containerd[1591]: time="2025-09-09T05:27:23.436220162Z" level=info msg="Start recovering state" Sep 9 05:27:23.436951 containerd[1591]: time="2025-09-09T05:27:23.436418074Z" level=info msg="Start event monitor" Sep 9 05:27:23.436951 containerd[1591]: time="2025-09-09T05:27:23.436458990Z" level=info msg="Start cni network conf syncer for default" Sep 9 05:27:23.436951 containerd[1591]: time="2025-09-09T05:27:23.436474469Z" level=info msg="Start streaming server" Sep 9 05:27:23.436951 containerd[1591]: time="2025-09-09T05:27:23.436493976Z" level=info msg="Registered namespace \"k8s.io\" with NRI" Sep 9 05:27:23.436951 containerd[1591]: time="2025-09-09T05:27:23.436507010Z" level=info msg="runtime interface starting up..." 
Sep 9 05:27:23.436951 containerd[1591]: time="2025-09-09T05:27:23.436517490Z" level=info msg="starting plugins..." Sep 9 05:27:23.436951 containerd[1591]: time="2025-09-09T05:27:23.436548598Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Sep 9 05:27:23.436951 containerd[1591]: time="2025-09-09T05:27:23.436764263Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Sep 9 05:27:23.436951 containerd[1591]: time="2025-09-09T05:27:23.436869280Z" level=info msg=serving... address=/run/containerd/containerd.sock Sep 9 05:27:23.437375 containerd[1591]: time="2025-09-09T05:27:23.437335104Z" level=info msg="containerd successfully booted in 0.974816s" Sep 9 05:27:23.437505 systemd[1]: Started containerd.service - containerd container runtime. Sep 9 05:27:23.668858 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Sep 9 05:27:23.671895 systemd[1]: Started sshd@0-10.0.0.16:22-10.0.0.1:35044.service - OpenSSH per-connection server daemon (10.0.0.1:35044). Sep 9 05:27:23.789330 sshd[1669]: Accepted publickey for core from 10.0.0.1 port 35044 ssh2: RSA SHA256:7F/y7C4gusWo4gyqUKS6/QkQLBnINJ/p9+95m14vjQE Sep 9 05:27:23.792326 sshd-session[1669]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 05:27:23.801905 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Sep 9 05:27:23.805456 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Sep 9 05:27:23.815368 systemd-logind[1533]: New session 1 of user core. Sep 9 05:27:23.830086 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Sep 9 05:27:23.835024 systemd[1]: Starting user@500.service - User Manager for UID 500... Sep 9 05:27:23.851109 (systemd)[1674]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Sep 9 05:27:23.853880 systemd-logind[1533]: New session c1 of user core. 
Sep 9 05:27:24.025481 systemd[1674]: Queued start job for default target default.target. Sep 9 05:27:24.047005 systemd[1674]: Created slice app.slice - User Application Slice. Sep 9 05:27:24.047043 systemd[1674]: Reached target paths.target - Paths. Sep 9 05:27:24.047105 systemd[1674]: Reached target timers.target - Timers. Sep 9 05:27:24.049087 systemd[1674]: Starting dbus.socket - D-Bus User Message Bus Socket... Sep 9 05:27:24.067167 systemd[1674]: Listening on dbus.socket - D-Bus User Message Bus Socket. Sep 9 05:27:24.067338 systemd[1674]: Reached target sockets.target - Sockets. Sep 9 05:27:24.067401 systemd[1674]: Reached target basic.target - Basic System. Sep 9 05:27:24.067450 systemd[1674]: Reached target default.target - Main User Target. Sep 9 05:27:24.067500 systemd[1674]: Startup finished in 205ms. Sep 9 05:27:24.067985 systemd[1]: Started user@500.service - User Manager for UID 500. Sep 9 05:27:24.071497 systemd[1]: Started session-1.scope - Session 1 of User core. Sep 9 05:27:24.138140 systemd[1]: Started sshd@1-10.0.0.16:22-10.0.0.1:35058.service - OpenSSH per-connection server daemon (10.0.0.1:35058). Sep 9 05:27:24.201445 sshd[1685]: Accepted publickey for core from 10.0.0.1 port 35058 ssh2: RSA SHA256:7F/y7C4gusWo4gyqUKS6/QkQLBnINJ/p9+95m14vjQE Sep 9 05:27:24.203825 sshd-session[1685]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 05:27:24.210275 systemd-logind[1533]: New session 2 of user core. Sep 9 05:27:24.224137 systemd[1]: Started session-2.scope - Session 2 of User core. Sep 9 05:27:24.287320 sshd[1688]: Connection closed by 10.0.0.1 port 35058 Sep 9 05:27:24.289073 sshd-session[1685]: pam_unix(sshd:session): session closed for user core Sep 9 05:27:24.301997 systemd[1]: sshd@1-10.0.0.16:22-10.0.0.1:35058.service: Deactivated successfully. Sep 9 05:27:24.305617 systemd[1]: session-2.scope: Deactivated successfully. Sep 9 05:27:24.306942 systemd-logind[1533]: Session 2 logged out. 
Waiting for processes to exit. Sep 9 05:27:24.312133 systemd[1]: Started sshd@2-10.0.0.16:22-10.0.0.1:35060.service - OpenSSH per-connection server daemon (10.0.0.1:35060). Sep 9 05:27:24.314790 systemd-logind[1533]: Removed session 2. Sep 9 05:27:24.386757 sshd[1694]: Accepted publickey for core from 10.0.0.1 port 35060 ssh2: RSA SHA256:7F/y7C4gusWo4gyqUKS6/QkQLBnINJ/p9+95m14vjQE Sep 9 05:27:24.389355 sshd-session[1694]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 05:27:24.396145 systemd-logind[1533]: New session 3 of user core. Sep 9 05:27:24.411179 systemd[1]: Started session-3.scope - Session 3 of User core. Sep 9 05:27:24.493961 sshd[1697]: Connection closed by 10.0.0.1 port 35060 Sep 9 05:27:24.494413 sshd-session[1694]: pam_unix(sshd:session): session closed for user core Sep 9 05:27:24.499310 systemd[1]: sshd@2-10.0.0.16:22-10.0.0.1:35060.service: Deactivated successfully. Sep 9 05:27:24.501689 systemd[1]: session-3.scope: Deactivated successfully. Sep 9 05:27:24.502671 systemd-logind[1533]: Session 3 logged out. Waiting for processes to exit. Sep 9 05:27:24.504139 systemd-logind[1533]: Removed session 3. Sep 9 05:27:24.720823 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 9 05:27:24.722832 systemd[1]: Reached target multi-user.target - Multi-User System. Sep 9 05:27:24.724596 systemd[1]: Startup finished in 4.009s (kernel) + 8.040s (initrd) + 6.262s (userspace) = 18.311s. 
Sep 9 05:27:24.731687 (kubelet)[1707]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 9 05:27:25.558275 kubelet[1707]: E0909 05:27:25.558170 1707 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 9 05:27:25.562786 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 9 05:27:25.563031 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 9 05:27:25.563473 systemd[1]: kubelet.service: Consumed 2.161s CPU time, 265.8M memory peak. Sep 9 05:27:34.511392 systemd[1]: Started sshd@3-10.0.0.16:22-10.0.0.1:56770.service - OpenSSH per-connection server daemon (10.0.0.1:56770). Sep 9 05:27:34.581355 sshd[1720]: Accepted publickey for core from 10.0.0.1 port 56770 ssh2: RSA SHA256:7F/y7C4gusWo4gyqUKS6/QkQLBnINJ/p9+95m14vjQE Sep 9 05:27:34.583266 sshd-session[1720]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 05:27:34.588523 systemd-logind[1533]: New session 4 of user core. Sep 9 05:27:34.598059 systemd[1]: Started session-4.scope - Session 4 of User core. Sep 9 05:27:34.651769 sshd[1723]: Connection closed by 10.0.0.1 port 56770 Sep 9 05:27:34.652237 sshd-session[1720]: pam_unix(sshd:session): session closed for user core Sep 9 05:27:34.667076 systemd[1]: sshd@3-10.0.0.16:22-10.0.0.1:56770.service: Deactivated successfully. Sep 9 05:27:34.669073 systemd[1]: session-4.scope: Deactivated successfully. Sep 9 05:27:34.669797 systemd-logind[1533]: Session 4 logged out. Waiting for processes to exit. Sep 9 05:27:34.672633 systemd[1]: Started sshd@4-10.0.0.16:22-10.0.0.1:56786.service - OpenSSH per-connection server daemon (10.0.0.1:56786). 
Sep 9 05:27:34.673481 systemd-logind[1533]: Removed session 4. Sep 9 05:27:34.734795 sshd[1729]: Accepted publickey for core from 10.0.0.1 port 56786 ssh2: RSA SHA256:7F/y7C4gusWo4gyqUKS6/QkQLBnINJ/p9+95m14vjQE Sep 9 05:27:34.736734 sshd-session[1729]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 05:27:34.741571 systemd-logind[1533]: New session 5 of user core. Sep 9 05:27:34.751149 systemd[1]: Started session-5.scope - Session 5 of User core. Sep 9 05:27:34.803687 sshd[1732]: Connection closed by 10.0.0.1 port 56786 Sep 9 05:27:34.804332 sshd-session[1729]: pam_unix(sshd:session): session closed for user core Sep 9 05:27:34.817970 systemd[1]: sshd@4-10.0.0.16:22-10.0.0.1:56786.service: Deactivated successfully. Sep 9 05:27:34.819869 systemd[1]: session-5.scope: Deactivated successfully. Sep 9 05:27:34.820692 systemd-logind[1533]: Session 5 logged out. Waiting for processes to exit. Sep 9 05:27:34.823480 systemd[1]: Started sshd@5-10.0.0.16:22-10.0.0.1:56792.service - OpenSSH per-connection server daemon (10.0.0.1:56792). Sep 9 05:27:34.824275 systemd-logind[1533]: Removed session 5. Sep 9 05:27:34.883270 sshd[1738]: Accepted publickey for core from 10.0.0.1 port 56792 ssh2: RSA SHA256:7F/y7C4gusWo4gyqUKS6/QkQLBnINJ/p9+95m14vjQE Sep 9 05:27:34.884795 sshd-session[1738]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 05:27:34.889454 systemd-logind[1533]: New session 6 of user core. Sep 9 05:27:34.900119 systemd[1]: Started session-6.scope - Session 6 of User core. Sep 9 05:27:34.954677 sshd[1741]: Connection closed by 10.0.0.1 port 56792 Sep 9 05:27:34.955130 sshd-session[1738]: pam_unix(sshd:session): session closed for user core Sep 9 05:27:34.971979 systemd[1]: sshd@5-10.0.0.16:22-10.0.0.1:56792.service: Deactivated successfully. Sep 9 05:27:34.974071 systemd[1]: session-6.scope: Deactivated successfully. Sep 9 05:27:34.974882 systemd-logind[1533]: Session 6 logged out. 
Waiting for processes to exit. Sep 9 05:27:34.977748 systemd[1]: Started sshd@6-10.0.0.16:22-10.0.0.1:56808.service - OpenSSH per-connection server daemon (10.0.0.1:56808). Sep 9 05:27:34.978531 systemd-logind[1533]: Removed session 6. Sep 9 05:27:35.045711 sshd[1747]: Accepted publickey for core from 10.0.0.1 port 56808 ssh2: RSA SHA256:7F/y7C4gusWo4gyqUKS6/QkQLBnINJ/p9+95m14vjQE Sep 9 05:27:35.047576 sshd-session[1747]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 05:27:35.052561 systemd-logind[1533]: New session 7 of user core. Sep 9 05:27:35.063043 systemd[1]: Started session-7.scope - Session 7 of User core. Sep 9 05:27:35.124752 sudo[1751]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Sep 9 05:27:35.125166 sudo[1751]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 9 05:27:35.259068 sudo[1751]: pam_unix(sudo:session): session closed for user root Sep 9 05:27:35.261014 sshd[1750]: Connection closed by 10.0.0.1 port 56808 Sep 9 05:27:35.261655 sshd-session[1747]: pam_unix(sshd:session): session closed for user core Sep 9 05:27:35.272654 systemd[1]: sshd@6-10.0.0.16:22-10.0.0.1:56808.service: Deactivated successfully. Sep 9 05:27:35.274835 systemd[1]: session-7.scope: Deactivated successfully. Sep 9 05:27:35.275739 systemd-logind[1533]: Session 7 logged out. Waiting for processes to exit. Sep 9 05:27:35.278958 systemd[1]: Started sshd@7-10.0.0.16:22-10.0.0.1:56818.service - OpenSSH per-connection server daemon (10.0.0.1:56818). Sep 9 05:27:35.279785 systemd-logind[1533]: Removed session 7. Sep 9 05:27:35.338300 sshd[1757]: Accepted publickey for core from 10.0.0.1 port 56818 ssh2: RSA SHA256:7F/y7C4gusWo4gyqUKS6/QkQLBnINJ/p9+95m14vjQE Sep 9 05:27:35.339644 sshd-session[1757]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 05:27:35.344487 systemd-logind[1533]: New session 8 of user core. 
Sep 9 05:27:35.354083 systemd[1]: Started session-8.scope - Session 8 of User core. Sep 9 05:27:35.408374 sudo[1762]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Sep 9 05:27:35.408684 sudo[1762]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 9 05:27:35.461352 sudo[1762]: pam_unix(sudo:session): session closed for user root Sep 9 05:27:35.468366 sudo[1761]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Sep 9 05:27:35.468748 sudo[1761]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 9 05:27:35.478843 systemd[1]: Starting audit-rules.service - Load Audit Rules... Sep 9 05:27:35.533348 augenrules[1784]: No rules Sep 9 05:27:35.535033 systemd[1]: audit-rules.service: Deactivated successfully. Sep 9 05:27:35.535398 systemd[1]: Finished audit-rules.service - Load Audit Rules. Sep 9 05:27:35.536886 sudo[1761]: pam_unix(sudo:session): session closed for user root Sep 9 05:27:35.538567 sshd[1760]: Connection closed by 10.0.0.1 port 56818 Sep 9 05:27:35.538902 sshd-session[1757]: pam_unix(sshd:session): session closed for user core Sep 9 05:27:35.550119 systemd[1]: sshd@7-10.0.0.16:22-10.0.0.1:56818.service: Deactivated successfully. Sep 9 05:27:35.552030 systemd[1]: session-8.scope: Deactivated successfully. Sep 9 05:27:35.552844 systemd-logind[1533]: Session 8 logged out. Waiting for processes to exit. Sep 9 05:27:35.555706 systemd[1]: Started sshd@8-10.0.0.16:22-10.0.0.1:56824.service - OpenSSH per-connection server daemon (10.0.0.1:56824). Sep 9 05:27:35.556619 systemd-logind[1533]: Removed session 8. Sep 9 05:27:35.576312 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Sep 9 05:27:35.577856 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
Sep 9 05:27:35.617380 sshd[1793]: Accepted publickey for core from 10.0.0.1 port 56824 ssh2: RSA SHA256:7F/y7C4gusWo4gyqUKS6/QkQLBnINJ/p9+95m14vjQE Sep 9 05:27:35.619360 sshd-session[1793]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 05:27:35.624258 systemd-logind[1533]: New session 9 of user core. Sep 9 05:27:35.634078 systemd[1]: Started session-9.scope - Session 9 of User core. Sep 9 05:27:35.688283 sudo[1800]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Sep 9 05:27:35.688631 sudo[1800]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 9 05:27:35.824833 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 9 05:27:35.842441 (kubelet)[1815]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 9 05:27:35.939785 kubelet[1815]: E0909 05:27:35.939541 1815 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 9 05:27:35.968128 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 9 05:27:35.968365 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 9 05:27:35.968923 systemd[1]: kubelet.service: Consumed 338ms CPU time, 110.8M memory peak. Sep 9 05:27:36.284936 systemd[1]: Starting docker.service - Docker Application Container Engine... 
Sep 9 05:27:36.307470 (dockerd)[1835]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Sep 9 05:27:36.803366 dockerd[1835]: time="2025-09-09T05:27:36.803274335Z" level=info msg="Starting up" Sep 9 05:27:36.804399 dockerd[1835]: time="2025-09-09T05:27:36.804369209Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" Sep 9 05:27:36.880944 dockerd[1835]: time="2025-09-09T05:27:36.880853942Z" level=info msg="Creating a containerd client" address=/var/run/docker/libcontainerd/docker-containerd.sock timeout=1m0s Sep 9 05:27:37.286538 dockerd[1835]: time="2025-09-09T05:27:37.286459653Z" level=info msg="Loading containers: start." Sep 9 05:27:37.299954 kernel: Initializing XFRM netlink socket Sep 9 05:27:37.660208 systemd-networkd[1472]: docker0: Link UP Sep 9 05:27:37.670255 dockerd[1835]: time="2025-09-09T05:27:37.670170124Z" level=info msg="Loading containers: done." Sep 9 05:27:37.694176 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck3361444618-merged.mount: Deactivated successfully. 
Sep 9 05:27:37.698350 dockerd[1835]: time="2025-09-09T05:27:37.698282194Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Sep 9 05:27:37.698462 dockerd[1835]: time="2025-09-09T05:27:37.698424912Z" level=info msg="Docker daemon" commit=6430e49a55babd9b8f4d08e70ecb2b68900770fe containerd-snapshotter=false storage-driver=overlay2 version=28.0.4 Sep 9 05:27:37.698575 dockerd[1835]: time="2025-09-09T05:27:37.698549285Z" level=info msg="Initializing buildkit" Sep 9 05:27:37.763154 dockerd[1835]: time="2025-09-09T05:27:37.763064668Z" level=info msg="Completed buildkit initialization" Sep 9 05:27:37.771614 dockerd[1835]: time="2025-09-09T05:27:37.771566404Z" level=info msg="Daemon has completed initialization" Sep 9 05:27:37.771933 dockerd[1835]: time="2025-09-09T05:27:37.771831211Z" level=info msg="API listen on /run/docker.sock" Sep 9 05:27:37.771974 systemd[1]: Started docker.service - Docker Application Container Engine. Sep 9 05:27:39.283267 containerd[1591]: time="2025-09-09T05:27:39.283201820Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.12\"" Sep 9 05:27:41.202434 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2939804911.mount: Deactivated successfully. 
Sep 9 05:27:42.852866 containerd[1591]: time="2025-09-09T05:27:42.852783535Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.31.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 05:27:42.853602 containerd[1591]: time="2025-09-09T05:27:42.853546516Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.31.12: active requests=0, bytes read=28079631" Sep 9 05:27:42.855245 containerd[1591]: time="2025-09-09T05:27:42.855119106Z" level=info msg="ImageCreate event name:\"sha256:b1963c5b49c1722b8f408deaf83aafca7f48f47fed0ed14e5c10e93cc55974a7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 05:27:42.901173 containerd[1591]: time="2025-09-09T05:27:42.901111595Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:e9011c3bee8c06ecabd7816e119dca4e448c92f7a78acd891de3d2db1dc6c234\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 05:27:42.902242 containerd[1591]: time="2025-09-09T05:27:42.902189707Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.31.12\" with image id \"sha256:b1963c5b49c1722b8f408deaf83aafca7f48f47fed0ed14e5c10e93cc55974a7\", repo tag \"registry.k8s.io/kube-apiserver:v1.31.12\", repo digest \"registry.k8s.io/kube-apiserver@sha256:e9011c3bee8c06ecabd7816e119dca4e448c92f7a78acd891de3d2db1dc6c234\", size \"28076431\" in 3.618935268s" Sep 9 05:27:42.902242 containerd[1591]: time="2025-09-09T05:27:42.902227528Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.12\" returns image reference \"sha256:b1963c5b49c1722b8f408deaf83aafca7f48f47fed0ed14e5c10e93cc55974a7\"" Sep 9 05:27:42.905908 containerd[1591]: time="2025-09-09T05:27:42.905852117Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.12\"" Sep 9 05:27:44.435188 containerd[1591]: time="2025-09-09T05:27:44.435060705Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.31.12\" 
labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 05:27:44.436043 containerd[1591]: time="2025-09-09T05:27:44.435943280Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.31.12: active requests=0, bytes read=24714681" Sep 9 05:27:44.437950 containerd[1591]: time="2025-09-09T05:27:44.437878560Z" level=info msg="ImageCreate event name:\"sha256:200c1a99a6f2b9d3b0a6e9b7362663513589341e0e58bc3b953a373efa735dfd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 05:27:44.442005 containerd[1591]: time="2025-09-09T05:27:44.441905895Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:d2862f94d87320267fddbd55db26556a267aa802e51d6b60f25786b4c428afc8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 05:27:44.443440 containerd[1591]: time="2025-09-09T05:27:44.443355133Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.31.12\" with image id \"sha256:200c1a99a6f2b9d3b0a6e9b7362663513589341e0e58bc3b953a373efa735dfd\", repo tag \"registry.k8s.io/kube-controller-manager:v1.31.12\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:d2862f94d87320267fddbd55db26556a267aa802e51d6b60f25786b4c428afc8\", size \"26317875\" in 1.53745678s" Sep 9 05:27:44.443440 containerd[1591]: time="2025-09-09T05:27:44.443422820Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.12\" returns image reference \"sha256:200c1a99a6f2b9d3b0a6e9b7362663513589341e0e58bc3b953a373efa735dfd\"" Sep 9 05:27:44.444348 containerd[1591]: time="2025-09-09T05:27:44.444294986Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.12\"" Sep 9 05:27:46.194336 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Sep 9 05:27:46.196466 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 9 05:27:46.485385 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Sep 9 05:27:46.490312 (kubelet)[2120]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 9 05:27:46.595352 kubelet[2120]: E0909 05:27:46.595257 2120 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 9 05:27:46.601596 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 9 05:27:46.601819 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 9 05:27:46.602459 systemd[1]: kubelet.service: Consumed 279ms CPU time, 111.1M memory peak. Sep 9 05:27:47.218954 containerd[1591]: time="2025-09-09T05:27:47.218073171Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.31.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 05:27:47.241634 containerd[1591]: time="2025-09-09T05:27:47.241550037Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.31.12: active requests=0, bytes read=18782427" Sep 9 05:27:47.292820 containerd[1591]: time="2025-09-09T05:27:47.292735696Z" level=info msg="ImageCreate event name:\"sha256:bcdd9599681a9460a5539177a986dbdaf880ac56eeb117ab94adb8f37889efba\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 05:27:47.313090 containerd[1591]: time="2025-09-09T05:27:47.312976372Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:152943b7e30244f4415fd0a5860a2dccd91660fe983d30a28a10edb0cc8f6756\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 05:27:47.313867 containerd[1591]: time="2025-09-09T05:27:47.313826517Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.31.12\" with image id 
\"sha256:bcdd9599681a9460a5539177a986dbdaf880ac56eeb117ab94adb8f37889efba\", repo tag \"registry.k8s.io/kube-scheduler:v1.31.12\", repo digest \"registry.k8s.io/kube-scheduler@sha256:152943b7e30244f4415fd0a5860a2dccd91660fe983d30a28a10edb0cc8f6756\", size \"20385639\" in 2.869478932s" Sep 9 05:27:47.313942 containerd[1591]: time="2025-09-09T05:27:47.313872042Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.12\" returns image reference \"sha256:bcdd9599681a9460a5539177a986dbdaf880ac56eeb117ab94adb8f37889efba\"" Sep 9 05:27:47.314603 containerd[1591]: time="2025-09-09T05:27:47.314575642Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.12\"" Sep 9 05:27:49.621683 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1814057143.mount: Deactivated successfully. Sep 9 05:27:50.207650 containerd[1591]: time="2025-09-09T05:27:50.207539916Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.31.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 05:27:50.208405 containerd[1591]: time="2025-09-09T05:27:50.208366657Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.31.12: active requests=0, bytes read=30384255" Sep 9 05:27:50.210277 containerd[1591]: time="2025-09-09T05:27:50.210184787Z" level=info msg="ImageCreate event name:\"sha256:507cc52f5f78c0cff25e904c76c18e6bfc90982e9cc2aa4dcb19033f21c3f679\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 05:27:50.212630 containerd[1591]: time="2025-09-09T05:27:50.212579148Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:90aa6b5f4065937521ff8438bc705317485d0be3f8b00a07145e697d92cc2cc6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 05:27:50.213685 containerd[1591]: time="2025-09-09T05:27:50.213592389Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.31.12\" with image id \"sha256:507cc52f5f78c0cff25e904c76c18e6bfc90982e9cc2aa4dcb19033f21c3f679\", repo tag 
\"registry.k8s.io/kube-proxy:v1.31.12\", repo digest \"registry.k8s.io/kube-proxy@sha256:90aa6b5f4065937521ff8438bc705317485d0be3f8b00a07145e697d92cc2cc6\", size \"30383274\" in 2.89898134s" Sep 9 05:27:50.213685 containerd[1591]: time="2025-09-09T05:27:50.213676216Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.12\" returns image reference \"sha256:507cc52f5f78c0cff25e904c76c18e6bfc90982e9cc2aa4dcb19033f21c3f679\"" Sep 9 05:27:50.214587 containerd[1591]: time="2025-09-09T05:27:50.214543813Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" Sep 9 05:27:51.067687 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1741962617.mount: Deactivated successfully. Sep 9 05:27:52.890306 containerd[1591]: time="2025-09-09T05:27:52.890224365Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 05:27:52.892174 containerd[1591]: time="2025-09-09T05:27:52.892077141Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=18565241" Sep 9 05:27:52.893935 containerd[1591]: time="2025-09-09T05:27:52.893842542Z" level=info msg="ImageCreate event name:\"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 05:27:52.898351 containerd[1591]: time="2025-09-09T05:27:52.898265869Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 05:27:52.899236 containerd[1591]: time="2025-09-09T05:27:52.899163613Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest 
\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"18562039\" in 2.684578552s" Sep 9 05:27:52.899236 containerd[1591]: time="2025-09-09T05:27:52.899224507Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\"" Sep 9 05:27:52.900055 containerd[1591]: time="2025-09-09T05:27:52.899999491Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Sep 9 05:27:53.440281 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3472012724.mount: Deactivated successfully. Sep 9 05:27:53.472506 containerd[1591]: time="2025-09-09T05:27:53.472391995Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 9 05:27:53.473820 containerd[1591]: time="2025-09-09T05:27:53.473741056Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138" Sep 9 05:27:53.475568 containerd[1591]: time="2025-09-09T05:27:53.475500706Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 9 05:27:53.477822 containerd[1591]: time="2025-09-09T05:27:53.477753372Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 9 05:27:53.478461 containerd[1591]: time="2025-09-09T05:27:53.478422587Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag 
\"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 578.369095ms" Sep 9 05:27:53.478461 containerd[1591]: time="2025-09-09T05:27:53.478455238Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Sep 9 05:27:53.479474 containerd[1591]: time="2025-09-09T05:27:53.479428554Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\"" Sep 9 05:27:54.105866 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1905336443.mount: Deactivated successfully. Sep 9 05:27:56.694590 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Sep 9 05:27:56.696797 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 9 05:27:57.149871 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 9 05:27:57.168281 (kubelet)[2252]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 9 05:27:57.222259 kubelet[2252]: E0909 05:27:57.222185 2252 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 9 05:27:57.226233 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 9 05:27:57.226463 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 9 05:27:57.226899 systemd[1]: kubelet.service: Consumed 255ms CPU time, 110.3M memory peak. 
Sep 9 05:27:57.623859 containerd[1591]: time="2025-09-09T05:27:57.623737711Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.15-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 05:27:57.624842 containerd[1591]: time="2025-09-09T05:27:57.624805389Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.15-0: active requests=0, bytes read=56910709" Sep 9 05:27:57.626341 containerd[1591]: time="2025-09-09T05:27:57.626292611Z" level=info msg="ImageCreate event name:\"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 05:27:57.629473 containerd[1591]: time="2025-09-09T05:27:57.629440619Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 05:27:57.630568 containerd[1591]: time="2025-09-09T05:27:57.630529147Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.15-0\" with image id \"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\", repo tag \"registry.k8s.io/etcd:3.5.15-0\", repo digest \"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\", size \"56909194\" in 4.151055157s" Sep 9 05:27:57.630568 containerd[1591]: time="2025-09-09T05:27:57.630565988Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\" returns image reference \"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\"" Sep 9 05:27:59.682890 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Sep 9 05:27:59.683080 systemd[1]: kubelet.service: Consumed 255ms CPU time, 110.3M memory peak. Sep 9 05:27:59.685527 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 9 05:27:59.718132 systemd[1]: Reload requested from client PID 2292 ('systemctl') (unit session-9.scope)... 
Sep 9 05:27:59.718157 systemd[1]: Reloading... Sep 9 05:27:59.817960 zram_generator::config[2335]: No configuration found. Sep 9 05:28:00.195540 systemd[1]: Reloading finished in 476 ms. Sep 9 05:28:00.272088 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Sep 9 05:28:00.272243 systemd[1]: kubelet.service: Failed with result 'signal'. Sep 9 05:28:00.272681 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Sep 9 05:28:00.272739 systemd[1]: kubelet.service: Consumed 177ms CPU time, 98.4M memory peak. Sep 9 05:28:00.274778 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 9 05:28:00.511637 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 9 05:28:00.565622 (kubelet)[2383]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Sep 9 05:28:00.626006 kubelet[2383]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 9 05:28:00.626006 kubelet[2383]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Sep 9 05:28:00.626006 kubelet[2383]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Sep 9 05:28:00.626006 kubelet[2383]: I0909 05:28:00.625624 2383 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Sep 9 05:28:01.269560 kubelet[2383]: I0909 05:28:01.269482 2383 server.go:491] "Kubelet version" kubeletVersion="v1.31.8" Sep 9 05:28:01.269560 kubelet[2383]: I0909 05:28:01.269529 2383 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Sep 9 05:28:01.269874 kubelet[2383]: I0909 05:28:01.269840 2383 server.go:934] "Client rotation is on, will bootstrap in background" Sep 9 05:28:01.334764 kubelet[2383]: E0909 05:28:01.334693 2383 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.16:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.16:6443: connect: connection refused" logger="UnhandledError" Sep 9 05:28:01.335416 kubelet[2383]: I0909 05:28:01.335391 2383 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Sep 9 05:28:01.356218 kubelet[2383]: I0909 05:28:01.356173 2383 server.go:1431] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Sep 9 05:28:01.364076 kubelet[2383]: I0909 05:28:01.364033 2383 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Sep 9 05:28:01.375593 kubelet[2383]: I0909 05:28:01.375545 2383 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Sep 9 05:28:01.375793 kubelet[2383]: I0909 05:28:01.375745 2383 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Sep 9 05:28:01.376018 kubelet[2383]: I0909 05:28:01.375780 2383 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOpti
ons":null,"CgroupVersion":2} Sep 9 05:28:01.376244 kubelet[2383]: I0909 05:28:01.376027 2383 topology_manager.go:138] "Creating topology manager with none policy" Sep 9 05:28:01.376244 kubelet[2383]: I0909 05:28:01.376036 2383 container_manager_linux.go:300] "Creating device plugin manager" Sep 9 05:28:01.376244 kubelet[2383]: I0909 05:28:01.376194 2383 state_mem.go:36] "Initialized new in-memory state store" Sep 9 05:28:01.379060 kubelet[2383]: I0909 05:28:01.378998 2383 kubelet.go:408] "Attempting to sync node with API server" Sep 9 05:28:01.379110 kubelet[2383]: I0909 05:28:01.379096 2383 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Sep 9 05:28:01.379218 kubelet[2383]: I0909 05:28:01.379195 2383 kubelet.go:314] "Adding apiserver pod source" Sep 9 05:28:01.379274 kubelet[2383]: I0909 05:28:01.379258 2383 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Sep 9 05:28:01.382435 kubelet[2383]: I0909 05:28:01.382398 2383 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v2.0.5" apiVersion="v1" Sep 9 05:28:01.383054 kubelet[2383]: I0909 05:28:01.383014 2383 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Sep 9 05:28:01.386203 kubelet[2383]: W0909 05:28:01.386116 2383 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.16:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.16:6443: connect: connection refused Sep 9 05:28:01.386284 kubelet[2383]: E0909 05:28:01.386207 2383 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.16:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.16:6443: connect: connection refused" logger="UnhandledError" Sep 9 
05:28:01.386319 kubelet[2383]: W0909 05:28:01.386304 2383 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Sep 9 05:28:01.388910 kubelet[2383]: I0909 05:28:01.388877 2383 server.go:1274] "Started kubelet" Sep 9 05:28:01.389489 kubelet[2383]: I0909 05:28:01.389393 2383 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Sep 9 05:28:01.389957 kubelet[2383]: I0909 05:28:01.389931 2383 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Sep 9 05:28:01.390076 kubelet[2383]: I0909 05:28:01.390038 2383 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Sep 9 05:28:01.390566 kubelet[2383]: I0909 05:28:01.390526 2383 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Sep 9 05:28:01.392115 kubelet[2383]: I0909 05:28:01.391283 2383 server.go:449] "Adding debug handlers to kubelet server" Sep 9 05:28:01.393693 kubelet[2383]: W0909 05:28:01.393620 2383 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.16:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.16:6443: connect: connection refused Sep 9 05:28:01.393768 kubelet[2383]: E0909 05:28:01.393696 2383 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.16:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.16:6443: connect: connection refused" logger="UnhandledError" Sep 9 05:28:01.394637 kubelet[2383]: I0909 05:28:01.394612 2383 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Sep 9 05:28:01.405484 kubelet[2383]: E0909 05:28:01.405444 2383 
kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Sep 9 05:28:01.405795 kubelet[2383]: E0909 05:28:01.405743 2383 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 9 05:28:01.405963 kubelet[2383]: I0909 05:28:01.405942 2383 volume_manager.go:289] "Starting Kubelet Volume Manager" Sep 9 05:28:01.406546 kubelet[2383]: I0909 05:28:01.406525 2383 desired_state_of_world_populator.go:147] "Desired state populator starts to run" Sep 9 05:28:01.406740 kubelet[2383]: I0909 05:28:01.406725 2383 reconciler.go:26] "Reconciler: start to sync state" Sep 9 05:28:01.407901 kubelet[2383]: W0909 05:28:01.407650 2383 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.16:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.16:6443: connect: connection refused Sep 9 05:28:01.407901 kubelet[2383]: E0909 05:28:01.407781 2383 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.16:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.16:6443: connect: connection refused" logger="UnhandledError" Sep 9 05:28:01.408130 kubelet[2383]: I0909 05:28:01.408063 2383 factory.go:221] Registration of the systemd container factory successfully Sep 9 05:28:01.408181 kubelet[2383]: E0909 05:28:01.408140 2383 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.16:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.16:6443: connect: connection refused" interval="200ms" Sep 9 05:28:01.408226 kubelet[2383]: I0909 05:28:01.408185 2383 factory.go:219] Registration of the crio container factory failed: Get 
"http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Sep 9 05:28:01.410085 kubelet[2383]: I0909 05:28:01.410060 2383 factory.go:221] Registration of the containerd container factory successfully Sep 9 05:28:01.422429 kubelet[2383]: E0909 05:28:01.419882 2383 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.16:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.16:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.1863860fa6fcaecb default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-09-09 05:28:01.388834507 +0000 UTC m=+0.815823702,LastTimestamp:2025-09-09 05:28:01.388834507 +0000 UTC m=+0.815823702,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Sep 9 05:28:01.423579 kubelet[2383]: I0909 05:28:01.423547 2383 cpu_manager.go:214] "Starting CPU manager" policy="none" Sep 9 05:28:01.423579 kubelet[2383]: I0909 05:28:01.423571 2383 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Sep 9 05:28:01.423672 kubelet[2383]: I0909 05:28:01.423598 2383 state_mem.go:36] "Initialized new in-memory state store" Sep 9 05:28:01.431270 kubelet[2383]: I0909 05:28:01.431191 2383 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Sep 9 05:28:01.432729 kubelet[2383]: I0909 05:28:01.432692 2383 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Sep 9 05:28:01.432780 kubelet[2383]: I0909 05:28:01.432736 2383 status_manager.go:217] "Starting to sync pod status with apiserver" Sep 9 05:28:01.432780 kubelet[2383]: I0909 05:28:01.432778 2383 kubelet.go:2321] "Starting kubelet main sync loop" Sep 9 05:28:01.432871 kubelet[2383]: E0909 05:28:01.432833 2383 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Sep 9 05:28:01.433956 kubelet[2383]: W0909 05:28:01.433794 2383 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.16:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.16:6443: connect: connection refused Sep 9 05:28:01.433956 kubelet[2383]: E0909 05:28:01.433838 2383 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.16:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.16:6443: connect: connection refused" logger="UnhandledError" Sep 9 05:28:01.506087 kubelet[2383]: E0909 05:28:01.506002 2383 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 9 05:28:01.533821 kubelet[2383]: E0909 05:28:01.533629 2383 kubelet.go:2345] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Sep 9 05:28:01.606649 kubelet[2383]: E0909 05:28:01.606567 2383 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 9 05:28:01.609419 kubelet[2383]: E0909 05:28:01.609352 2383 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.16:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.16:6443: connect: connection 
refused" interval="400ms" Sep 9 05:28:01.707623 kubelet[2383]: E0909 05:28:01.707553 2383 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 9 05:28:01.734129 kubelet[2383]: E0909 05:28:01.734026 2383 kubelet.go:2345] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Sep 9 05:28:01.808629 kubelet[2383]: E0909 05:28:01.808481 2383 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 9 05:28:01.908723 kubelet[2383]: E0909 05:28:01.908641 2383 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 9 05:28:02.009282 kubelet[2383]: E0909 05:28:02.009207 2383 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 9 05:28:02.010757 kubelet[2383]: E0909 05:28:02.010711 2383 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.16:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.16:6443: connect: connection refused" interval="800ms" Sep 9 05:28:02.105672 kubelet[2383]: E0909 05:28:02.105406 2383 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.16:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.16:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.1863860fa6fcaecb default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-09-09 05:28:01.388834507 +0000 UTC m=+0.815823702,LastTimestamp:2025-09-09 05:28:01.388834507 +0000 UTC m=+0.815823702,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 
UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}"
Sep 9 05:28:02.109719 kubelet[2383]: E0909 05:28:02.109650 2383 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found"
Sep 9 05:28:02.134834 kubelet[2383]: E0909 05:28:02.134775 2383 kubelet.go:2345] "Skipping pod synchronization" err="container runtime status check may not have completed yet"
Sep 9 05:28:02.210393 kubelet[2383]: E0909 05:28:02.210319 2383 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found"
Sep 9 05:28:02.310994 kubelet[2383]: E0909 05:28:02.310899 2383 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found"
Sep 9 05:28:02.411701 kubelet[2383]: E0909 05:28:02.411537 2383 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found"
Sep 9 05:28:02.426152 kubelet[2383]: W0909 05:28:02.426120 2383 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.16:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.16:6443: connect: connection refused
Sep 9 05:28:02.426243 kubelet[2383]: E0909 05:28:02.426161 2383 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.16:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.16:6443: connect: connection refused" logger="UnhandledError"
Sep 9 05:28:02.430820 kubelet[2383]: W0909 05:28:02.430773 2383 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.16:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.16:6443: connect: connection refused
Sep 9 05:28:02.430871 kubelet[2383]: E0909 05:28:02.430828 2383 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.16:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.16:6443: connect: connection refused" logger="UnhandledError"
Sep 9 05:28:02.512730 kubelet[2383]: E0909 05:28:02.512650 2383 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found"
Sep 9 05:28:02.540527 kubelet[2383]: W0909 05:28:02.540447 2383 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.16:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.16:6443: connect: connection refused
Sep 9 05:28:02.540605 kubelet[2383]: E0909 05:28:02.540535 2383 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.16:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.16:6443: connect: connection refused" logger="UnhandledError"
Sep 9 05:28:02.613233 kubelet[2383]: E0909 05:28:02.613164 2383 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found"
Sep 9 05:28:02.693197 kubelet[2383]: W0909 05:28:02.693046 2383 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.16:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.16:6443: connect: connection refused
Sep 9 05:28:02.693197 kubelet[2383]: E0909 05:28:02.693124 2383 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.16:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.16:6443: connect: connection refused" logger="UnhandledError"
Sep 9 05:28:02.713859 kubelet[2383]: E0909 05:28:02.713763 2383 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found"
Sep 9 05:28:02.812078 kubelet[2383]: E0909 05:28:02.811946 2383 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.16:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.16:6443: connect: connection refused" interval="1.6s"
Sep 9 05:28:02.814931 kubelet[2383]: E0909 05:28:02.814865 2383 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found"
Sep 9 05:28:02.915123 kubelet[2383]: E0909 05:28:02.915053 2383 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found"
Sep 9 05:28:02.935273 kubelet[2383]: E0909 05:28:02.935200 2383 kubelet.go:2345] "Skipping pod synchronization" err="container runtime status check may not have completed yet"
Sep 9 05:28:03.015819 kubelet[2383]: E0909 05:28:03.015733 2383 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found"
Sep 9 05:28:03.116380 kubelet[2383]: E0909 05:28:03.116295 2383 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found"
Sep 9 05:28:03.217004 kubelet[2383]: E0909 05:28:03.216932 2383 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found"
Sep 9 05:28:03.317466 kubelet[2383]: E0909 05:28:03.317317 2383 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found"
Sep 9 05:28:03.418283 kubelet[2383]: E0909 05:28:03.418198 2383 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found"
Sep 9 05:28:03.471498 kubelet[2383]: E0909 05:28:03.471424 2383 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.16:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.16:6443: connect: connection refused" logger="UnhandledError"
Sep 9 05:28:03.519255 kubelet[2383]: E0909 05:28:03.519184 2383 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found"
Sep 9 05:28:03.619885 kubelet[2383]: E0909 05:28:03.619718 2383 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found"
Sep 9 05:28:03.720310 kubelet[2383]: E0909 05:28:03.720238 2383 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found"
Sep 9 05:28:03.821076 kubelet[2383]: E0909 05:28:03.820983 2383 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found"
Sep 9 05:28:03.922286 kubelet[2383]: E0909 05:28:03.922099 2383 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found"
Sep 9 05:28:04.022952 kubelet[2383]: E0909 05:28:04.022872 2383 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found"
Sep 9 05:28:04.080627 kubelet[2383]: W0909 05:28:04.080557 2383 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.16:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.16:6443: connect: connection refused
Sep 9 05:28:04.080627 kubelet[2383]: E0909 05:28:04.080617 2383 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.16:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.16:6443: connect: connection refused" logger="UnhandledError"
Sep 9 05:28:04.123336 kubelet[2383]: E0909 05:28:04.123248 2383 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found"
Sep 9 05:28:04.223890 kubelet[2383]: E0909 05:28:04.223804 2383 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found"
Sep 9 05:28:04.324540 kubelet[2383]: E0909 05:28:04.324455 2383 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found"
Sep 9 05:28:04.412620 kubelet[2383]: E0909 05:28:04.412529 2383 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.16:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.16:6443: connect: connection refused" interval="3.2s"
Sep 9 05:28:04.424845 kubelet[2383]: E0909 05:28:04.424764 2383 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found"
Sep 9 05:28:04.441626 kubelet[2383]: W0909 05:28:04.441588 2383 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.16:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.16:6443: connect: connection refused
Sep 9 05:28:04.441747 kubelet[2383]: E0909 05:28:04.441638 2383 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.16:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.16:6443: connect: connection refused" logger="UnhandledError"
Sep 9 05:28:04.525703 kubelet[2383]: E0909 05:28:04.525504 2383 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found"
Sep 9 05:28:04.535748 kubelet[2383]: E0909 05:28:04.535689 2383 kubelet.go:2345] "Skipping pod synchronization" err="container runtime status check may not have completed yet"
Sep 9 05:28:04.613631 kubelet[2383]: W0909 05:28:04.613577 2383 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.16:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.16:6443: connect: connection refused
Sep 9 05:28:04.613631 kubelet[2383]: E0909 05:28:04.613624 2383 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.16:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.16:6443: connect: connection refused" logger="UnhandledError"
Sep 9 05:28:04.626197 kubelet[2383]: E0909 05:28:04.626139 2383 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found"
Sep 9 05:28:04.726991 kubelet[2383]: E0909 05:28:04.726896 2383 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found"
Sep 9 05:28:04.827741 kubelet[2383]: E0909 05:28:04.827516 2383 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found"
Sep 9 05:28:04.834461 kubelet[2383]: I0909 05:28:04.834426 2383 policy_none.go:49] "None policy: Start"
Sep 9 05:28:04.835288 kubelet[2383]: I0909 05:28:04.835260 2383 memory_manager.go:170] "Starting memorymanager" policy="None"
Sep 9 05:28:04.835355 kubelet[2383]: I0909 05:28:04.835293 2383 state_mem.go:35] "Initializing new in-memory state store"
Sep 9 05:28:04.928032 kubelet[2383]: E0909 05:28:04.927944 2383 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found"
Sep 9 05:28:05.028732 kubelet[2383]: E0909 05:28:05.028643 2383 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found"
Sep 9 05:28:05.095343 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice.
Sep 9 05:28:05.110325 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice.
Sep 9 05:28:05.113486 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice.
Sep 9 05:28:05.129563 kubelet[2383]: E0909 05:28:05.129521 2383 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found"
Sep 9 05:28:05.132986 kubelet[2383]: I0909 05:28:05.132905 2383 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Sep 9 05:28:05.133227 kubelet[2383]: I0909 05:28:05.133208 2383 eviction_manager.go:189] "Eviction manager: starting control loop"
Sep 9 05:28:05.133277 kubelet[2383]: I0909 05:28:05.133232 2383 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Sep 9 05:28:05.133485 kubelet[2383]: I0909 05:28:05.133468 2383 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Sep 9 05:28:05.134740 kubelet[2383]: E0909 05:28:05.134719 2383 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found"
Sep 9 05:28:05.235040 kubelet[2383]: I0909 05:28:05.234993 2383 kubelet_node_status.go:72] "Attempting to register node" node="localhost"
Sep 9 05:28:05.235553 kubelet[2383]: E0909 05:28:05.235511 2383 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.16:6443/api/v1/nodes\": dial tcp 10.0.0.16:6443: connect: connection refused" node="localhost"
Sep 9 05:28:05.437413 kubelet[2383]: I0909 05:28:05.437302 2383 kubelet_node_status.go:72] "Attempting to register node" node="localhost"
Sep 9 05:28:05.437871 kubelet[2383]: E0909 05:28:05.437806 2383 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.16:6443/api/v1/nodes\": dial tcp 10.0.0.16:6443: connect: connection refused" node="localhost"
Sep 9 05:28:05.647499 kubelet[2383]: W0909 05:28:05.647425 2383 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.16:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.16:6443: connect: connection refused
Sep 9 05:28:05.647499 kubelet[2383]: E0909 05:28:05.647480 2383 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.16:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.16:6443: connect: connection refused" logger="UnhandledError"
Sep 9 05:28:05.840349 kubelet[2383]: I0909 05:28:05.840297 2383 kubelet_node_status.go:72] "Attempting to register node" node="localhost"
Sep 9 05:28:05.840846 kubelet[2383]: E0909 05:28:05.840761 2383 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.16:6443/api/v1/nodes\": dial tcp 10.0.0.16:6443: connect: connection refused" node="localhost"
Sep 9 05:28:06.414434 update_engine[1542]: I20250909 05:28:06.414314 1542 update_attempter.cc:509] Updating boot flags...
Sep 9 05:28:06.643295 kubelet[2383]: I0909 05:28:06.642984 2383 kubelet_node_status.go:72] "Attempting to register node" node="localhost"
Sep 9 05:28:06.643493 kubelet[2383]: E0909 05:28:06.643443 2383 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.16:6443/api/v1/nodes\": dial tcp 10.0.0.16:6443: connect: connection refused" node="localhost"
Sep 9 05:28:07.614089 kubelet[2383]: E0909 05:28:07.614024 2383 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.16:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.16:6443: connect: connection refused" interval="6.4s"
Sep 9 05:28:07.696470 kubelet[2383]: E0909 05:28:07.696394 2383 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.16:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.16:6443: connect: connection refused" logger="UnhandledError"
Sep 9 05:28:07.748369 systemd[1]: Created slice kubepods-burstable-podce3664073a8ce6c29d22ffa014357d62.slice - libcontainer container kubepods-burstable-podce3664073a8ce6c29d22ffa014357d62.slice.
Sep 9 05:28:07.767473 systemd[1]: Created slice kubepods-burstable-podfec3f691a145cb26ff55e4af388500b7.slice - libcontainer container kubepods-burstable-podfec3f691a145cb26ff55e4af388500b7.slice.
Sep 9 05:28:07.779174 systemd[1]: Created slice kubepods-burstable-pod5dc878868de11c6196259ae42039f4ff.slice - libcontainer container kubepods-burstable-pod5dc878868de11c6196259ae42039f4ff.slice.
Sep 9 05:28:07.847116 kubelet[2383]: I0909 05:28:07.847041 2383 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/ce3664073a8ce6c29d22ffa014357d62-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"ce3664073a8ce6c29d22ffa014357d62\") " pod="kube-system/kube-apiserver-localhost"
Sep 9 05:28:07.847116 kubelet[2383]: I0909 05:28:07.847094 2383 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/ce3664073a8ce6c29d22ffa014357d62-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"ce3664073a8ce6c29d22ffa014357d62\") " pod="kube-system/kube-apiserver-localhost"
Sep 9 05:28:07.847116 kubelet[2383]: I0909 05:28:07.847122 2383 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/fec3f691a145cb26ff55e4af388500b7-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"fec3f691a145cb26ff55e4af388500b7\") " pod="kube-system/kube-controller-manager-localhost"
Sep 9 05:28:07.847350 kubelet[2383]: I0909 05:28:07.847141 2383 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/fec3f691a145cb26ff55e4af388500b7-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"fec3f691a145cb26ff55e4af388500b7\") " pod="kube-system/kube-controller-manager-localhost"
Sep 9 05:28:07.847350 kubelet[2383]: I0909 05:28:07.847162 2383 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/fec3f691a145cb26ff55e4af388500b7-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"fec3f691a145cb26ff55e4af388500b7\") " pod="kube-system/kube-controller-manager-localhost"
Sep 9 05:28:07.847350 kubelet[2383]: I0909 05:28:07.847199 2383 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/fec3f691a145cb26ff55e4af388500b7-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"fec3f691a145cb26ff55e4af388500b7\") " pod="kube-system/kube-controller-manager-localhost"
Sep 9 05:28:07.847350 kubelet[2383]: I0909 05:28:07.847222 2383 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/5dc878868de11c6196259ae42039f4ff-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"5dc878868de11c6196259ae42039f4ff\") " pod="kube-system/kube-scheduler-localhost"
Sep 9 05:28:07.847350 kubelet[2383]: I0909 05:28:07.847249 2383 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/ce3664073a8ce6c29d22ffa014357d62-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"ce3664073a8ce6c29d22ffa014357d62\") " pod="kube-system/kube-apiserver-localhost"
Sep 9 05:28:07.847467 kubelet[2383]: I0909 05:28:07.847276 2383 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/fec3f691a145cb26ff55e4af388500b7-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"fec3f691a145cb26ff55e4af388500b7\") " pod="kube-system/kube-controller-manager-localhost"
Sep 9 05:28:08.067019 containerd[1591]: time="2025-09-09T05:28:08.066965954Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:ce3664073a8ce6c29d22ffa014357d62,Namespace:kube-system,Attempt:0,}"
Sep 9 05:28:08.076768 containerd[1591]: time="2025-09-09T05:28:08.076709944Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:fec3f691a145cb26ff55e4af388500b7,Namespace:kube-system,Attempt:0,}"
Sep 9 05:28:08.082581 containerd[1591]: time="2025-09-09T05:28:08.082531273Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:5dc878868de11c6196259ae42039f4ff,Namespace:kube-system,Attempt:0,}"
Sep 9 05:28:08.109215 containerd[1591]: time="2025-09-09T05:28:08.109145536Z" level=info msg="connecting to shim 4ded62162c9c8f088e8ce4a8a0b163903e5925a80b9b97f29ccc729a1eb25af9" address="unix:///run/containerd/s/33bca88e8613aef8371923370d79daf55552c1869841509718b577fce7895f05" namespace=k8s.io protocol=ttrpc version=3
Sep 9 05:28:08.122336 containerd[1591]: time="2025-09-09T05:28:08.122265868Z" level=info msg="connecting to shim 089353f09338509b6998b895739ffc07badff7662c4ee84dba30acd51fa57306" address="unix:///run/containerd/s/467fba0ede60fab9119f1ba7ea77d34a10f57affc34af8a001ad97a0d4fcc6ff" namespace=k8s.io protocol=ttrpc version=3
Sep 9 05:28:08.133040 containerd[1591]: time="2025-09-09T05:28:08.132984345Z" level=info msg="connecting to shim c9b58a9583c619d7ca8170f61a465cda53700032b1d341a5f9e0d82bab670a66" address="unix:///run/containerd/s/8634e380be06e22da65a286ff28772c24898950ae975ffad0812f5e95ec8d518" namespace=k8s.io protocol=ttrpc version=3
Sep 9 05:28:08.136856 kubelet[2383]: W0909 05:28:08.136794 2383 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.16:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.16:6443: connect: connection refused
Sep 9 05:28:08.136856 kubelet[2383]: E0909 05:28:08.136850 2383 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.16:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.16:6443: connect: connection refused" logger="UnhandledError"
Sep 9 05:28:08.157511 kubelet[2383]: W0909 05:28:08.157455 2383 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.16:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.16:6443: connect: connection refused
Sep 9 05:28:08.157635 kubelet[2383]: E0909 05:28:08.157519 2383 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.16:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.16:6443: connect: connection refused" logger="UnhandledError"
Sep 9 05:28:08.179249 systemd[1]: Started cri-containerd-c9b58a9583c619d7ca8170f61a465cda53700032b1d341a5f9e0d82bab670a66.scope - libcontainer container c9b58a9583c619d7ca8170f61a465cda53700032b1d341a5f9e0d82bab670a66.
Sep 9 05:28:08.185866 systemd[1]: Started cri-containerd-089353f09338509b6998b895739ffc07badff7662c4ee84dba30acd51fa57306.scope - libcontainer container 089353f09338509b6998b895739ffc07badff7662c4ee84dba30acd51fa57306.
Sep 9 05:28:08.188688 systemd[1]: Started cri-containerd-4ded62162c9c8f088e8ce4a8a0b163903e5925a80b9b97f29ccc729a1eb25af9.scope - libcontainer container 4ded62162c9c8f088e8ce4a8a0b163903e5925a80b9b97f29ccc729a1eb25af9.
Sep 9 05:28:08.245700 kubelet[2383]: I0909 05:28:08.245639 2383 kubelet_node_status.go:72] "Attempting to register node" node="localhost"
Sep 9 05:28:08.246089 kubelet[2383]: E0909 05:28:08.246047 2383 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.16:6443/api/v1/nodes\": dial tcp 10.0.0.16:6443: connect: connection refused" node="localhost"
Sep 9 05:28:08.360937 containerd[1591]: time="2025-09-09T05:28:08.360693959Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:ce3664073a8ce6c29d22ffa014357d62,Namespace:kube-system,Attempt:0,} returns sandbox id \"4ded62162c9c8f088e8ce4a8a0b163903e5925a80b9b97f29ccc729a1eb25af9\""
Sep 9 05:28:08.362504 containerd[1591]: time="2025-09-09T05:28:08.362446533Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:5dc878868de11c6196259ae42039f4ff,Namespace:kube-system,Attempt:0,} returns sandbox id \"c9b58a9583c619d7ca8170f61a465cda53700032b1d341a5f9e0d82bab670a66\""
Sep 9 05:28:08.365031 containerd[1591]: time="2025-09-09T05:28:08.364949680Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:fec3f691a145cb26ff55e4af388500b7,Namespace:kube-system,Attempt:0,} returns sandbox id \"089353f09338509b6998b895739ffc07badff7662c4ee84dba30acd51fa57306\""
Sep 9 05:28:08.365974 containerd[1591]: time="2025-09-09T05:28:08.365901194Z" level=info msg="CreateContainer within sandbox \"c9b58a9583c619d7ca8170f61a465cda53700032b1d341a5f9e0d82bab670a66\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}"
Sep 9 05:28:08.366411 containerd[1591]: time="2025-09-09T05:28:08.365903579Z" level=info msg="CreateContainer within sandbox \"4ded62162c9c8f088e8ce4a8a0b163903e5925a80b9b97f29ccc729a1eb25af9\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}"
Sep 9 05:28:08.368867 containerd[1591]: time="2025-09-09T05:28:08.368788841Z" level=info msg="CreateContainer within sandbox \"089353f09338509b6998b895739ffc07badff7662c4ee84dba30acd51fa57306\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}"
Sep 9 05:28:08.383554 containerd[1591]: time="2025-09-09T05:28:08.383474050Z" level=info msg="Container a19a9fdd836f98229435c568910c8c51072f2168f698624b2f8f3e2584ce09e2: CDI devices from CRI Config.CDIDevices: []"
Sep 9 05:28:08.388105 containerd[1591]: time="2025-09-09T05:28:08.387854848Z" level=info msg="Container ce848cde2e508718ae411b10440a2190012ca24e08b7f13d4433ee514742e642: CDI devices from CRI Config.CDIDevices: []"
Sep 9 05:28:08.392761 containerd[1591]: time="2025-09-09T05:28:08.392705808Z" level=info msg="Container 14ee25eebfb489373ada82f9a73b3ae900991c62029e64daa5695c1260a5e9e5: CDI devices from CRI Config.CDIDevices: []"
Sep 9 05:28:08.398367 containerd[1591]: time="2025-09-09T05:28:08.398303153Z" level=info msg="CreateContainer within sandbox \"c9b58a9583c619d7ca8170f61a465cda53700032b1d341a5f9e0d82bab670a66\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"a19a9fdd836f98229435c568910c8c51072f2168f698624b2f8f3e2584ce09e2\""
Sep 9 05:28:08.399145 containerd[1591]: time="2025-09-09T05:28:08.399100345Z" level=info msg="StartContainer for \"a19a9fdd836f98229435c568910c8c51072f2168f698624b2f8f3e2584ce09e2\""
Sep 9 05:28:08.400511 containerd[1591]: time="2025-09-09T05:28:08.400443503Z" level=info msg="connecting to shim a19a9fdd836f98229435c568910c8c51072f2168f698624b2f8f3e2584ce09e2" address="unix:///run/containerd/s/8634e380be06e22da65a286ff28772c24898950ae975ffad0812f5e95ec8d518" protocol=ttrpc version=3
Sep 9 05:28:08.408117 containerd[1591]: time="2025-09-09T05:28:08.408057993Z" level=info msg="CreateContainer within sandbox \"089353f09338509b6998b895739ffc07badff7662c4ee84dba30acd51fa57306\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"14ee25eebfb489373ada82f9a73b3ae900991c62029e64daa5695c1260a5e9e5\""
Sep 9 05:28:08.409100 containerd[1591]: time="2025-09-09T05:28:08.408872758Z" level=info msg="StartContainer for \"14ee25eebfb489373ada82f9a73b3ae900991c62029e64daa5695c1260a5e9e5\""
Sep 9 05:28:08.409484 containerd[1591]: time="2025-09-09T05:28:08.409442379Z" level=info msg="CreateContainer within sandbox \"4ded62162c9c8f088e8ce4a8a0b163903e5925a80b9b97f29ccc729a1eb25af9\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"ce848cde2e508718ae411b10440a2190012ca24e08b7f13d4433ee514742e642\""
Sep 9 05:28:08.410590 containerd[1591]: time="2025-09-09T05:28:08.410413991Z" level=info msg="connecting to shim 14ee25eebfb489373ada82f9a73b3ae900991c62029e64daa5695c1260a5e9e5" address="unix:///run/containerd/s/467fba0ede60fab9119f1ba7ea77d34a10f57affc34af8a001ad97a0d4fcc6ff" protocol=ttrpc version=3
Sep 9 05:28:08.411187 containerd[1591]: time="2025-09-09T05:28:08.411162551Z" level=info msg="StartContainer for \"ce848cde2e508718ae411b10440a2190012ca24e08b7f13d4433ee514742e642\""
Sep 9 05:28:08.414177 containerd[1591]: time="2025-09-09T05:28:08.412782392Z" level=info msg="connecting to shim ce848cde2e508718ae411b10440a2190012ca24e08b7f13d4433ee514742e642" address="unix:///run/containerd/s/33bca88e8613aef8371923370d79daf55552c1869841509718b577fce7895f05" protocol=ttrpc version=3
Sep 9 05:28:08.425153 systemd[1]: Started cri-containerd-a19a9fdd836f98229435c568910c8c51072f2168f698624b2f8f3e2584ce09e2.scope - libcontainer container a19a9fdd836f98229435c568910c8c51072f2168f698624b2f8f3e2584ce09e2.
Sep 9 05:28:08.443117 systemd[1]: Started cri-containerd-14ee25eebfb489373ada82f9a73b3ae900991c62029e64daa5695c1260a5e9e5.scope - libcontainer container 14ee25eebfb489373ada82f9a73b3ae900991c62029e64daa5695c1260a5e9e5.
Sep 9 05:28:08.445073 systemd[1]: Started cri-containerd-ce848cde2e508718ae411b10440a2190012ca24e08b7f13d4433ee514742e642.scope - libcontainer container ce848cde2e508718ae411b10440a2190012ca24e08b7f13d4433ee514742e642.
Sep 9 05:28:08.542101 containerd[1591]: time="2025-09-09T05:28:08.542044765Z" level=info msg="StartContainer for \"14ee25eebfb489373ada82f9a73b3ae900991c62029e64daa5695c1260a5e9e5\" returns successfully"
Sep 9 05:28:08.546281 containerd[1591]: time="2025-09-09T05:28:08.546219702Z" level=info msg="StartContainer for \"a19a9fdd836f98229435c568910c8c51072f2168f698624b2f8f3e2584ce09e2\" returns successfully"
Sep 9 05:28:08.567373 containerd[1591]: time="2025-09-09T05:28:08.567311913Z" level=info msg="StartContainer for \"ce848cde2e508718ae411b10440a2190012ca24e08b7f13d4433ee514742e642\" returns successfully"
Sep 9 05:28:11.388219 kubelet[2383]: I0909 05:28:11.388137 2383 apiserver.go:52] "Watching apiserver"
Sep 9 05:28:11.407323 kubelet[2383]: I0909 05:28:11.407241 2383 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world"
Sep 9 05:28:11.448135 kubelet[2383]: I0909 05:28:11.448081 2383 kubelet_node_status.go:72] "Attempting to register node" node="localhost"
Sep 9 05:28:11.468193 kubelet[2383]: I0909 05:28:11.468136 2383 kubelet_node_status.go:75] "Successfully registered node" node="localhost"
Sep 9 05:28:11.492104 kubelet[2383]: E0909 05:28:11.492023 2383 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost"
Sep 9 05:28:11.492377 kubelet[2383]: E0909 05:28:11.492349 2383 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost"
Sep 9 05:28:13.497053 systemd[1]: Reload requested from client PID 2674 ('systemctl') (unit session-9.scope)...
Sep 9 05:28:13.497070 systemd[1]: Reloading...
Sep 9 05:28:13.617970 zram_generator::config[2714]: No configuration found.
Sep 9 05:28:13.870903 systemd[1]: Reloading finished in 373 ms.
Sep 9 05:28:13.900739 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 9 05:28:13.915444 systemd[1]: kubelet.service: Deactivated successfully.
Sep 9 05:28:13.915803 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 9 05:28:13.915858 systemd[1]: kubelet.service: Consumed 1.474s CPU time, 133.6M memory peak.
Sep 9 05:28:13.917941 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 9 05:28:14.142744 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 9 05:28:14.153473 (kubelet)[2762]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Sep 9 05:28:14.242695 kubelet[2762]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Sep 9 05:28:14.242695 kubelet[2762]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Sep 9 05:28:14.242695 kubelet[2762]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Sep 9 05:28:14.242695 kubelet[2762]: I0909 05:28:14.241956 2762 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Sep 9 05:28:14.251491 kubelet[2762]: I0909 05:28:14.251390 2762 server.go:491] "Kubelet version" kubeletVersion="v1.31.8"
Sep 9 05:28:14.251491 kubelet[2762]: I0909 05:28:14.251450 2762 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Sep 9 05:28:14.251811 kubelet[2762]: I0909 05:28:14.251788 2762 server.go:934] "Client rotation is on, will bootstrap in background"
Sep 9 05:28:14.253761 kubelet[2762]: I0909 05:28:14.253342 2762 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
Sep 9 05:28:14.261618 kubelet[2762]: I0909 05:28:14.261495 2762 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Sep 9 05:28:14.265965 kubelet[2762]: I0909 05:28:14.265932 2762 server.go:1431] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd"
Sep 9 05:28:14.272021 kubelet[2762]: I0909 05:28:14.271103 2762 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Sep 9 05:28:14.272021 kubelet[2762]: I0909 05:28:14.271218 2762 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority"
Sep 9 05:28:14.272021 kubelet[2762]: I0909 05:28:14.271340 2762 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Sep 9 05:28:14.272021 kubelet[2762]: I0909 05:28:14.271366 2762 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Sep 9 05:28:14.272295 kubelet[2762]: I0909 05:28:14.271540 2762 topology_manager.go:138] "Creating topology manager with none policy"
Sep 9 05:28:14.272295 kubelet[2762]: I0909 05:28:14.271548 2762 container_manager_linux.go:300] "Creating device plugin manager"
Sep 9 05:28:14.272295 kubelet[2762]: I0909 05:28:14.271576 2762 state_mem.go:36] "Initialized new in-memory state store"
Sep 9 05:28:14.272295 kubelet[2762]: I0909 05:28:14.271701 2762 kubelet.go:408] "Attempting to sync node with API server"
Sep 9 05:28:14.272295 kubelet[2762]: I0909 05:28:14.271713 2762 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests"
Sep 9 05:28:14.272295 kubelet[2762]: I0909 05:28:14.271745 2762 kubelet.go:314] "Adding apiserver pod source"
Sep 9 05:28:14.272295 kubelet[2762]: I0909 05:28:14.271760 2762 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Sep 9 05:28:14.273334 kubelet[2762]: I0909 05:28:14.273304 2762 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v2.0.5" apiVersion="v1"
Sep 9 05:28:14.273684 kubelet[2762]: I0909 05:28:14.273646 2762 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Sep 9 05:28:14.274086 kubelet[2762]: I0909 05:28:14.274054 2762 server.go:1274] "Started kubelet"
Sep 9 05:28:14.277668 kubelet[2762]: I0909 05:28:14.277287 2762 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Sep 9 05:28:14.277972 kubelet[2762]: I0909 05:28:14.277947 2762 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Sep 9 05:28:14.278078 kubelet[2762]: I0909 05:28:14.278037 2762 server.go:163] "Starting to listen" address="0.0.0.0" port=10250
Sep 9 05:28:14.279172 kubelet[2762]: I0909 05:28:14.279130 2762 server.go:449] "Adding debug handlers to kubelet server"
Sep 9 05:28:14.279906 kubelet[2762]: I0909
05:28:14.279878 2762 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Sep 9 05:28:14.280299 kubelet[2762]: I0909 05:28:14.280098 2762 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Sep 9 05:28:14.284318 kubelet[2762]: I0909 05:28:14.284292 2762 volume_manager.go:289] "Starting Kubelet Volume Manager" Sep 9 05:28:14.288084 kubelet[2762]: E0909 05:28:14.288050 2762 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Sep 9 05:28:14.289733 kubelet[2762]: I0909 05:28:14.289686 2762 factory.go:221] Registration of the systemd container factory successfully Sep 9 05:28:14.290191 kubelet[2762]: I0909 05:28:14.290150 2762 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Sep 9 05:28:14.291452 kubelet[2762]: I0909 05:28:14.291155 2762 desired_state_of_world_populator.go:147] "Desired state populator starts to run" Sep 9 05:28:14.291452 kubelet[2762]: I0909 05:28:14.291353 2762 reconciler.go:26] "Reconciler: start to sync state" Sep 9 05:28:14.297350 kubelet[2762]: I0909 05:28:14.296508 2762 factory.go:221] Registration of the containerd container factory successfully Sep 9 05:28:14.299816 kubelet[2762]: I0909 05:28:14.299764 2762 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Sep 9 05:28:14.303011 kubelet[2762]: I0909 05:28:14.302971 2762 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Sep 9 05:28:14.303011 kubelet[2762]: I0909 05:28:14.303012 2762 status_manager.go:217] "Starting to sync pod status with apiserver" Sep 9 05:28:14.303118 kubelet[2762]: I0909 05:28:14.303039 2762 kubelet.go:2321] "Starting kubelet main sync loop" Sep 9 05:28:14.303118 kubelet[2762]: E0909 05:28:14.303086 2762 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Sep 9 05:28:14.334741 kubelet[2762]: I0909 05:28:14.334690 2762 cpu_manager.go:214] "Starting CPU manager" policy="none" Sep 9 05:28:14.334741 kubelet[2762]: I0909 05:28:14.334711 2762 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Sep 9 05:28:14.334741 kubelet[2762]: I0909 05:28:14.334734 2762 state_mem.go:36] "Initialized new in-memory state store" Sep 9 05:28:14.335021 kubelet[2762]: I0909 05:28:14.334937 2762 state_mem.go:88] "Updated default CPUSet" cpuSet="" Sep 9 05:28:14.335021 kubelet[2762]: I0909 05:28:14.334949 2762 state_mem.go:96] "Updated CPUSet assignments" assignments={} Sep 9 05:28:14.335021 kubelet[2762]: I0909 05:28:14.334969 2762 policy_none.go:49] "None policy: Start" Sep 9 05:28:14.335591 kubelet[2762]: I0909 05:28:14.335570 2762 memory_manager.go:170] "Starting memorymanager" policy="None" Sep 9 05:28:14.335591 kubelet[2762]: I0909 05:28:14.335593 2762 state_mem.go:35] "Initializing new in-memory state store" Sep 9 05:28:14.335800 kubelet[2762]: I0909 05:28:14.335778 2762 state_mem.go:75] "Updated machine memory state" Sep 9 05:28:14.341256 kubelet[2762]: I0909 05:28:14.341217 2762 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Sep 9 05:28:14.341461 kubelet[2762]: I0909 05:28:14.341443 2762 eviction_manager.go:189] "Eviction manager: starting control loop" Sep 9 05:28:14.341498 kubelet[2762]: I0909 05:28:14.341462 2762 container_log_manager.go:189] "Initializing container 
log rotate workers" workers=1 monitorPeriod="10s" Sep 9 05:28:14.341710 kubelet[2762]: I0909 05:28:14.341687 2762 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Sep 9 05:28:14.446202 kubelet[2762]: I0909 05:28:14.446164 2762 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Sep 9 05:28:14.492755 kubelet[2762]: I0909 05:28:14.492677 2762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/fec3f691a145cb26ff55e4af388500b7-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"fec3f691a145cb26ff55e4af388500b7\") " pod="kube-system/kube-controller-manager-localhost" Sep 9 05:28:14.492755 kubelet[2762]: I0909 05:28:14.492736 2762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/fec3f691a145cb26ff55e4af388500b7-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"fec3f691a145cb26ff55e4af388500b7\") " pod="kube-system/kube-controller-manager-localhost" Sep 9 05:28:14.492755 kubelet[2762]: I0909 05:28:14.492773 2762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/5dc878868de11c6196259ae42039f4ff-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"5dc878868de11c6196259ae42039f4ff\") " pod="kube-system/kube-scheduler-localhost" Sep 9 05:28:14.493109 kubelet[2762]: I0909 05:28:14.492797 2762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/ce3664073a8ce6c29d22ffa014357d62-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"ce3664073a8ce6c29d22ffa014357d62\") " pod="kube-system/kube-apiserver-localhost" Sep 9 05:28:14.493109 kubelet[2762]: I0909 05:28:14.492817 2762 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/fec3f691a145cb26ff55e4af388500b7-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"fec3f691a145cb26ff55e4af388500b7\") " pod="kube-system/kube-controller-manager-localhost" Sep 9 05:28:14.493109 kubelet[2762]: I0909 05:28:14.492837 2762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/fec3f691a145cb26ff55e4af388500b7-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"fec3f691a145cb26ff55e4af388500b7\") " pod="kube-system/kube-controller-manager-localhost" Sep 9 05:28:14.493109 kubelet[2762]: I0909 05:28:14.492859 2762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/ce3664073a8ce6c29d22ffa014357d62-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"ce3664073a8ce6c29d22ffa014357d62\") " pod="kube-system/kube-apiserver-localhost" Sep 9 05:28:14.493109 kubelet[2762]: I0909 05:28:14.492878 2762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/fec3f691a145cb26ff55e4af388500b7-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"fec3f691a145cb26ff55e4af388500b7\") " pod="kube-system/kube-controller-manager-localhost" Sep 9 05:28:14.493259 kubelet[2762]: I0909 05:28:14.492895 2762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/ce3664073a8ce6c29d22ffa014357d62-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"ce3664073a8ce6c29d22ffa014357d62\") " pod="kube-system/kube-apiserver-localhost" Sep 9 05:28:14.527750 kubelet[2762]: E0909 05:28:14.527622 2762 kubelet.go:1915] "Failed 
creating a mirror pod for" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost" Sep 9 05:28:14.529730 kubelet[2762]: I0909 05:28:14.529682 2762 kubelet_node_status.go:111] "Node was previously registered" node="localhost" Sep 9 05:28:14.530004 kubelet[2762]: I0909 05:28:14.529776 2762 kubelet_node_status.go:75] "Successfully registered node" node="localhost" Sep 9 05:28:14.594272 sudo[2796]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Sep 9 05:28:14.594756 sudo[2796]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0) Sep 9 05:28:15.227772 sudo[2796]: pam_unix(sudo:session): session closed for user root Sep 9 05:28:15.273533 kubelet[2762]: I0909 05:28:15.272680 2762 apiserver.go:52] "Watching apiserver" Sep 9 05:28:15.292636 kubelet[2762]: I0909 05:28:15.292548 2762 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world" Sep 9 05:28:15.322241 kubelet[2762]: E0909 05:28:15.322196 2762 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Sep 9 05:28:15.338185 kubelet[2762]: I0909 05:28:15.338099 2762 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=1.338048691 podStartE2EDuration="1.338048691s" podCreationTimestamp="2025-09-09 05:28:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-09 05:28:15.338042019 +0000 UTC m=+1.179484809" watchObservedRunningTime="2025-09-09 05:28:15.338048691 +0000 UTC m=+1.179491451" Sep 9 05:28:15.354099 kubelet[2762]: I0909 05:28:15.354024 2762 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=3.353996916 
podStartE2EDuration="3.353996916s" podCreationTimestamp="2025-09-09 05:28:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-09 05:28:15.346407591 +0000 UTC m=+1.187850351" watchObservedRunningTime="2025-09-09 05:28:15.353996916 +0000 UTC m=+1.195439676" Sep 9 05:28:15.364198 kubelet[2762]: I0909 05:28:15.364124 2762 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.364104679 podStartE2EDuration="1.364104679s" podCreationTimestamp="2025-09-09 05:28:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-09 05:28:15.354720423 +0000 UTC m=+1.196163183" watchObservedRunningTime="2025-09-09 05:28:15.364104679 +0000 UTC m=+1.205547449" Sep 9 05:28:17.711776 sudo[1800]: pam_unix(sudo:session): session closed for user root Sep 9 05:28:17.713590 sshd[1799]: Connection closed by 10.0.0.1 port 56824 Sep 9 05:28:17.714567 sshd-session[1793]: pam_unix(sshd:session): session closed for user core Sep 9 05:28:17.735064 systemd[1]: sshd@8-10.0.0.16:22-10.0.0.1:56824.service: Deactivated successfully. Sep 9 05:28:17.737868 systemd[1]: session-9.scope: Deactivated successfully. Sep 9 05:28:17.738186 systemd[1]: session-9.scope: Consumed 4.880s CPU time, 264.4M memory peak. Sep 9 05:28:17.739673 systemd-logind[1533]: Session 9 logged out. Waiting for processes to exit. Sep 9 05:28:17.741234 systemd-logind[1533]: Removed session 9. 
Sep 9 05:28:19.979628 kubelet[2762]: I0909 05:28:19.979566 2762 kuberuntime_manager.go:1635] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Sep 9 05:28:19.980395 kubelet[2762]: I0909 05:28:19.980151 2762 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Sep 9 05:28:19.980457 containerd[1591]: time="2025-09-09T05:28:19.979972317Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Sep 9 05:28:20.657377 systemd[1]: Created slice kubepods-besteffort-podda4587e7_749b_425a_99a3_0eeebf377d36.slice - libcontainer container kubepods-besteffort-podda4587e7_749b_425a_99a3_0eeebf377d36.slice. Sep 9 05:28:20.672823 systemd[1]: Created slice kubepods-burstable-pod0f22bafb_3aa7_4389_9029_67b34ad5fcd5.slice - libcontainer container kubepods-burstable-pod0f22bafb_3aa7_4389_9029_67b34ad5fcd5.slice. Sep 9 05:28:20.728904 kubelet[2762]: I0909 05:28:20.728832 2762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/0f22bafb-3aa7-4389-9029-67b34ad5fcd5-hubble-tls\") pod \"cilium-mqg6d\" (UID: \"0f22bafb-3aa7-4389-9029-67b34ad5fcd5\") " pod="kube-system/cilium-mqg6d" Sep 9 05:28:20.728904 kubelet[2762]: I0909 05:28:20.728880 2762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/da4587e7-749b-425a-99a3-0eeebf377d36-kube-proxy\") pod \"kube-proxy-dwkfb\" (UID: \"da4587e7-749b-425a-99a3-0eeebf377d36\") " pod="kube-system/kube-proxy-dwkfb" Sep 9 05:28:20.728904 kubelet[2762]: I0909 05:28:20.728901 2762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/0f22bafb-3aa7-4389-9029-67b34ad5fcd5-xtables-lock\") pod \"cilium-mqg6d\" (UID: \"0f22bafb-3aa7-4389-9029-67b34ad5fcd5\") " 
pod="kube-system/cilium-mqg6d" Sep 9 05:28:20.729167 kubelet[2762]: I0909 05:28:20.729053 2762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/0f22bafb-3aa7-4389-9029-67b34ad5fcd5-cilium-cgroup\") pod \"cilium-mqg6d\" (UID: \"0f22bafb-3aa7-4389-9029-67b34ad5fcd5\") " pod="kube-system/cilium-mqg6d" Sep 9 05:28:20.729167 kubelet[2762]: I0909 05:28:20.729076 2762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/0f22bafb-3aa7-4389-9029-67b34ad5fcd5-cni-path\") pod \"cilium-mqg6d\" (UID: \"0f22bafb-3aa7-4389-9029-67b34ad5fcd5\") " pod="kube-system/cilium-mqg6d" Sep 9 05:28:20.729167 kubelet[2762]: I0909 05:28:20.729090 2762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/0f22bafb-3aa7-4389-9029-67b34ad5fcd5-cilium-config-path\") pod \"cilium-mqg6d\" (UID: \"0f22bafb-3aa7-4389-9029-67b34ad5fcd5\") " pod="kube-system/cilium-mqg6d" Sep 9 05:28:20.729167 kubelet[2762]: I0909 05:28:20.729137 2762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/0f22bafb-3aa7-4389-9029-67b34ad5fcd5-host-proc-sys-net\") pod \"cilium-mqg6d\" (UID: \"0f22bafb-3aa7-4389-9029-67b34ad5fcd5\") " pod="kube-system/cilium-mqg6d" Sep 9 05:28:20.729167 kubelet[2762]: I0909 05:28:20.729152 2762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/0f22bafb-3aa7-4389-9029-67b34ad5fcd5-host-proc-sys-kernel\") pod \"cilium-mqg6d\" (UID: \"0f22bafb-3aa7-4389-9029-67b34ad5fcd5\") " pod="kube-system/cilium-mqg6d" Sep 9 05:28:20.729285 kubelet[2762]: I0909 05:28:20.729225 2762 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d8nnp\" (UniqueName: \"kubernetes.io/projected/da4587e7-749b-425a-99a3-0eeebf377d36-kube-api-access-d8nnp\") pod \"kube-proxy-dwkfb\" (UID: \"da4587e7-749b-425a-99a3-0eeebf377d36\") " pod="kube-system/kube-proxy-dwkfb" Sep 9 05:28:20.729311 kubelet[2762]: I0909 05:28:20.729299 2762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/0f22bafb-3aa7-4389-9029-67b34ad5fcd5-lib-modules\") pod \"cilium-mqg6d\" (UID: \"0f22bafb-3aa7-4389-9029-67b34ad5fcd5\") " pod="kube-system/cilium-mqg6d" Sep 9 05:28:20.729337 kubelet[2762]: I0909 05:28:20.729321 2762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/0f22bafb-3aa7-4389-9029-67b34ad5fcd5-bpf-maps\") pod \"cilium-mqg6d\" (UID: \"0f22bafb-3aa7-4389-9029-67b34ad5fcd5\") " pod="kube-system/cilium-mqg6d" Sep 9 05:28:20.729394 kubelet[2762]: I0909 05:28:20.729335 2762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pfv8q\" (UniqueName: \"kubernetes.io/projected/0f22bafb-3aa7-4389-9029-67b34ad5fcd5-kube-api-access-pfv8q\") pod \"cilium-mqg6d\" (UID: \"0f22bafb-3aa7-4389-9029-67b34ad5fcd5\") " pod="kube-system/cilium-mqg6d" Sep 9 05:28:20.729427 kubelet[2762]: I0909 05:28:20.729396 2762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/0f22bafb-3aa7-4389-9029-67b34ad5fcd5-clustermesh-secrets\") pod \"cilium-mqg6d\" (UID: \"0f22bafb-3aa7-4389-9029-67b34ad5fcd5\") " pod="kube-system/cilium-mqg6d" Sep 9 05:28:20.729427 kubelet[2762]: I0909 05:28:20.729410 2762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/da4587e7-749b-425a-99a3-0eeebf377d36-xtables-lock\") pod \"kube-proxy-dwkfb\" (UID: \"da4587e7-749b-425a-99a3-0eeebf377d36\") " pod="kube-system/kube-proxy-dwkfb" Sep 9 05:28:20.729593 kubelet[2762]: I0909 05:28:20.729565 2762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/da4587e7-749b-425a-99a3-0eeebf377d36-lib-modules\") pod \"kube-proxy-dwkfb\" (UID: \"da4587e7-749b-425a-99a3-0eeebf377d36\") " pod="kube-system/kube-proxy-dwkfb" Sep 9 05:28:20.729593 kubelet[2762]: I0909 05:28:20.729590 2762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/0f22bafb-3aa7-4389-9029-67b34ad5fcd5-etc-cni-netd\") pod \"cilium-mqg6d\" (UID: \"0f22bafb-3aa7-4389-9029-67b34ad5fcd5\") " pod="kube-system/cilium-mqg6d" Sep 9 05:28:20.729998 kubelet[2762]: I0909 05:28:20.729641 2762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/0f22bafb-3aa7-4389-9029-67b34ad5fcd5-cilium-run\") pod \"cilium-mqg6d\" (UID: \"0f22bafb-3aa7-4389-9029-67b34ad5fcd5\") " pod="kube-system/cilium-mqg6d" Sep 9 05:28:20.729998 kubelet[2762]: I0909 05:28:20.729656 2762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/0f22bafb-3aa7-4389-9029-67b34ad5fcd5-hostproc\") pod \"cilium-mqg6d\" (UID: \"0f22bafb-3aa7-4389-9029-67b34ad5fcd5\") " pod="kube-system/cilium-mqg6d" Sep 9 05:28:20.867849 systemd[1]: Created slice kubepods-besteffort-podcaa17d5a_131a_435e_b65a_a2953a95fa45.slice - libcontainer container kubepods-besteffort-podcaa17d5a_131a_435e_b65a_a2953a95fa45.slice. 
Sep 9 05:28:20.931099 kubelet[2762]: I0909 05:28:20.930890 2762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rxllv\" (UniqueName: \"kubernetes.io/projected/caa17d5a-131a-435e-b65a-a2953a95fa45-kube-api-access-rxllv\") pod \"cilium-operator-5d85765b45-5dkj6\" (UID: \"caa17d5a-131a-435e-b65a-a2953a95fa45\") " pod="kube-system/cilium-operator-5d85765b45-5dkj6" Sep 9 05:28:20.931099 kubelet[2762]: I0909 05:28:20.930994 2762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/caa17d5a-131a-435e-b65a-a2953a95fa45-cilium-config-path\") pod \"cilium-operator-5d85765b45-5dkj6\" (UID: \"caa17d5a-131a-435e-b65a-a2953a95fa45\") " pod="kube-system/cilium-operator-5d85765b45-5dkj6" Sep 9 05:28:20.971422 containerd[1591]: time="2025-09-09T05:28:20.971351948Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-dwkfb,Uid:da4587e7-749b-425a-99a3-0eeebf377d36,Namespace:kube-system,Attempt:0,}" Sep 9 05:28:20.991535 containerd[1591]: time="2025-09-09T05:28:20.991346454Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-mqg6d,Uid:0f22bafb-3aa7-4389-9029-67b34ad5fcd5,Namespace:kube-system,Attempt:0,}" Sep 9 05:28:20.994756 containerd[1591]: time="2025-09-09T05:28:20.994693504Z" level=info msg="connecting to shim c6d6d446e8cd23a7bfd89613f15af13bb99a21d80c3f78c2e632769a55310f45" address="unix:///run/containerd/s/16a74b5d2fa0981e98198e3140e76b5466b65ea900d61cfccd948b0f665652f2" namespace=k8s.io protocol=ttrpc version=3 Sep 9 05:28:21.018661 containerd[1591]: time="2025-09-09T05:28:21.018599594Z" level=info msg="connecting to shim 537ea36fc350ec35003b1a75493ced1ad34badf1f0d2e4a4786ba9702641d597" address="unix:///run/containerd/s/a409c6b00ff1e998593d944c107dc6c86fa8cc5033c2142085a177ca0557be6c" namespace=k8s.io protocol=ttrpc version=3 Sep 9 05:28:21.030195 systemd[1]: Started 
cri-containerd-c6d6d446e8cd23a7bfd89613f15af13bb99a21d80c3f78c2e632769a55310f45.scope - libcontainer container c6d6d446e8cd23a7bfd89613f15af13bb99a21d80c3f78c2e632769a55310f45. Sep 9 05:28:21.053064 systemd[1]: Started cri-containerd-537ea36fc350ec35003b1a75493ced1ad34badf1f0d2e4a4786ba9702641d597.scope - libcontainer container 537ea36fc350ec35003b1a75493ced1ad34badf1f0d2e4a4786ba9702641d597. Sep 9 05:28:21.135018 containerd[1591]: time="2025-09-09T05:28:21.134957701Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-dwkfb,Uid:da4587e7-749b-425a-99a3-0eeebf377d36,Namespace:kube-system,Attempt:0,} returns sandbox id \"c6d6d446e8cd23a7bfd89613f15af13bb99a21d80c3f78c2e632769a55310f45\"" Sep 9 05:28:21.138059 containerd[1591]: time="2025-09-09T05:28:21.138002830Z" level=info msg="CreateContainer within sandbox \"c6d6d446e8cd23a7bfd89613f15af13bb99a21d80c3f78c2e632769a55310f45\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Sep 9 05:28:21.139194 containerd[1591]: time="2025-09-09T05:28:21.139168377Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-mqg6d,Uid:0f22bafb-3aa7-4389-9029-67b34ad5fcd5,Namespace:kube-system,Attempt:0,} returns sandbox id \"537ea36fc350ec35003b1a75493ced1ad34badf1f0d2e4a4786ba9702641d597\"" Sep 9 05:28:21.140741 containerd[1591]: time="2025-09-09T05:28:21.140708400Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Sep 9 05:28:21.151302 containerd[1591]: time="2025-09-09T05:28:21.151259696Z" level=info msg="Container 60d890a0dc73163eed185773cf643cfbe35cb22b6ebfa9e9de93761b897a19c4: CDI devices from CRI Config.CDIDevices: []" Sep 9 05:28:21.160409 containerd[1591]: time="2025-09-09T05:28:21.160341411Z" level=info msg="CreateContainer within sandbox \"c6d6d446e8cd23a7bfd89613f15af13bb99a21d80c3f78c2e632769a55310f45\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id 
\"60d890a0dc73163eed185773cf643cfbe35cb22b6ebfa9e9de93761b897a19c4\"" Sep 9 05:28:21.161149 containerd[1591]: time="2025-09-09T05:28:21.161091245Z" level=info msg="StartContainer for \"60d890a0dc73163eed185773cf643cfbe35cb22b6ebfa9e9de93761b897a19c4\"" Sep 9 05:28:21.163054 containerd[1591]: time="2025-09-09T05:28:21.162719784Z" level=info msg="connecting to shim 60d890a0dc73163eed185773cf643cfbe35cb22b6ebfa9e9de93761b897a19c4" address="unix:///run/containerd/s/16a74b5d2fa0981e98198e3140e76b5466b65ea900d61cfccd948b0f665652f2" protocol=ttrpc version=3 Sep 9 05:28:21.171689 containerd[1591]: time="2025-09-09T05:28:21.171633623Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5d85765b45-5dkj6,Uid:caa17d5a-131a-435e-b65a-a2953a95fa45,Namespace:kube-system,Attempt:0,}" Sep 9 05:28:21.198034 containerd[1591]: time="2025-09-09T05:28:21.197962996Z" level=info msg="connecting to shim 07a3588b75bf41a21edc8f12994be2acae626f7f44190ac386233e8ea5f62dcf" address="unix:///run/containerd/s/b0b80323895bda1fb958e025db922245a5483a79db0227b6fd692db50d1c9c56" namespace=k8s.io protocol=ttrpc version=3 Sep 9 05:28:21.199210 systemd[1]: Started cri-containerd-60d890a0dc73163eed185773cf643cfbe35cb22b6ebfa9e9de93761b897a19c4.scope - libcontainer container 60d890a0dc73163eed185773cf643cfbe35cb22b6ebfa9e9de93761b897a19c4. Sep 9 05:28:21.238341 systemd[1]: Started cri-containerd-07a3588b75bf41a21edc8f12994be2acae626f7f44190ac386233e8ea5f62dcf.scope - libcontainer container 07a3588b75bf41a21edc8f12994be2acae626f7f44190ac386233e8ea5f62dcf. 
Sep 9 05:28:21.268681 containerd[1591]: time="2025-09-09T05:28:21.268625432Z" level=info msg="StartContainer for \"60d890a0dc73163eed185773cf643cfbe35cb22b6ebfa9e9de93761b897a19c4\" returns successfully" Sep 9 05:28:21.293111 containerd[1591]: time="2025-09-09T05:28:21.292748547Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5d85765b45-5dkj6,Uid:caa17d5a-131a-435e-b65a-a2953a95fa45,Namespace:kube-system,Attempt:0,} returns sandbox id \"07a3588b75bf41a21edc8f12994be2acae626f7f44190ac386233e8ea5f62dcf\"" Sep 9 05:28:21.344068 kubelet[2762]: I0909 05:28:21.343985 2762 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-dwkfb" podStartSLOduration=1.343963952 podStartE2EDuration="1.343963952s" podCreationTimestamp="2025-09-09 05:28:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-09 05:28:21.343839147 +0000 UTC m=+7.185281907" watchObservedRunningTime="2025-09-09 05:28:21.343963952 +0000 UTC m=+7.185406722" Sep 9 05:28:29.622213 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount247341165.mount: Deactivated successfully. 
Sep 9 05:28:32.798278 containerd[1591]: time="2025-09-09T05:28:32.798167110Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 05:28:32.798940 containerd[1591]: time="2025-09-09T05:28:32.798887244Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=166730503" Sep 9 05:28:32.800089 containerd[1591]: time="2025-09-09T05:28:32.800030854Z" level=info msg="ImageCreate event name:\"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 05:28:32.801737 containerd[1591]: time="2025-09-09T05:28:32.801699140Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"166719855\" in 11.660956546s" Sep 9 05:28:32.801737 containerd[1591]: time="2025-09-09T05:28:32.801737794Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\"" Sep 9 05:28:32.805797 containerd[1591]: time="2025-09-09T05:28:32.805734297Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Sep 9 05:28:32.814648 containerd[1591]: time="2025-09-09T05:28:32.814584306Z" level=info msg="CreateContainer within sandbox \"537ea36fc350ec35003b1a75493ced1ad34badf1f0d2e4a4786ba9702641d597\" for container 
&ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Sep 9 05:28:32.822650 containerd[1591]: time="2025-09-09T05:28:32.822580920Z" level=info msg="Container 1f651df15433ec026ff859bb3e1cc783db99b4654484bb3acc1c0d520fc7f123: CDI devices from CRI Config.CDIDevices: []" Sep 9 05:28:32.827070 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2588109323.mount: Deactivated successfully. Sep 9 05:28:32.832528 containerd[1591]: time="2025-09-09T05:28:32.832470815Z" level=info msg="CreateContainer within sandbox \"537ea36fc350ec35003b1a75493ced1ad34badf1f0d2e4a4786ba9702641d597\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"1f651df15433ec026ff859bb3e1cc783db99b4654484bb3acc1c0d520fc7f123\"" Sep 9 05:28:32.833075 containerd[1591]: time="2025-09-09T05:28:32.833039394Z" level=info msg="StartContainer for \"1f651df15433ec026ff859bb3e1cc783db99b4654484bb3acc1c0d520fc7f123\"" Sep 9 05:28:32.833856 containerd[1591]: time="2025-09-09T05:28:32.833815843Z" level=info msg="connecting to shim 1f651df15433ec026ff859bb3e1cc783db99b4654484bb3acc1c0d520fc7f123" address="unix:///run/containerd/s/a409c6b00ff1e998593d944c107dc6c86fa8cc5033c2142085a177ca0557be6c" protocol=ttrpc version=3 Sep 9 05:28:32.905303 systemd[1]: Started cri-containerd-1f651df15433ec026ff859bb3e1cc783db99b4654484bb3acc1c0d520fc7f123.scope - libcontainer container 1f651df15433ec026ff859bb3e1cc783db99b4654484bb3acc1c0d520fc7f123. Sep 9 05:28:32.947154 containerd[1591]: time="2025-09-09T05:28:32.947079162Z" level=info msg="StartContainer for \"1f651df15433ec026ff859bb3e1cc783db99b4654484bb3acc1c0d520fc7f123\" returns successfully" Sep 9 05:28:32.962306 systemd[1]: cri-containerd-1f651df15433ec026ff859bb3e1cc783db99b4654484bb3acc1c0d520fc7f123.scope: Deactivated successfully. 
Sep 9 05:28:32.964461 containerd[1591]: time="2025-09-09T05:28:32.964410588Z" level=info msg="received exit event container_id:\"1f651df15433ec026ff859bb3e1cc783db99b4654484bb3acc1c0d520fc7f123\" id:\"1f651df15433ec026ff859bb3e1cc783db99b4654484bb3acc1c0d520fc7f123\" pid:3180 exited_at:{seconds:1757395712 nanos:963934954}" Sep 9 05:28:32.964556 containerd[1591]: time="2025-09-09T05:28:32.964484727Z" level=info msg="TaskExit event in podsandbox handler container_id:\"1f651df15433ec026ff859bb3e1cc783db99b4654484bb3acc1c0d520fc7f123\" id:\"1f651df15433ec026ff859bb3e1cc783db99b4654484bb3acc1c0d520fc7f123\" pid:3180 exited_at:{seconds:1757395712 nanos:963934954}" Sep 9 05:28:32.988549 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-1f651df15433ec026ff859bb3e1cc783db99b4654484bb3acc1c0d520fc7f123-rootfs.mount: Deactivated successfully. Sep 9 05:28:34.402339 containerd[1591]: time="2025-09-09T05:28:34.402284180Z" level=info msg="CreateContainer within sandbox \"537ea36fc350ec35003b1a75493ced1ad34badf1f0d2e4a4786ba9702641d597\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Sep 9 05:28:35.436106 containerd[1591]: time="2025-09-09T05:28:35.436042475Z" level=info msg="Container b8be874bda79b74bd62d95e5f7862647b921f71091c1524d849732cf3e90924a: CDI devices from CRI Config.CDIDevices: []" Sep 9 05:28:35.485307 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2076158355.mount: Deactivated successfully. 
Sep 9 05:28:35.770333 containerd[1591]: time="2025-09-09T05:28:35.770251922Z" level=info msg="CreateContainer within sandbox \"537ea36fc350ec35003b1a75493ced1ad34badf1f0d2e4a4786ba9702641d597\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"b8be874bda79b74bd62d95e5f7862647b921f71091c1524d849732cf3e90924a\""
Sep 9 05:28:35.771102 containerd[1591]: time="2025-09-09T05:28:35.771042207Z" level=info msg="StartContainer for \"b8be874bda79b74bd62d95e5f7862647b921f71091c1524d849732cf3e90924a\""
Sep 9 05:28:35.772385 containerd[1591]: time="2025-09-09T05:28:35.772343241Z" level=info msg="connecting to shim b8be874bda79b74bd62d95e5f7862647b921f71091c1524d849732cf3e90924a" address="unix:///run/containerd/s/a409c6b00ff1e998593d944c107dc6c86fa8cc5033c2142085a177ca0557be6c" protocol=ttrpc version=3
Sep 9 05:28:35.795074 systemd[1]: Started cri-containerd-b8be874bda79b74bd62d95e5f7862647b921f71091c1524d849732cf3e90924a.scope - libcontainer container b8be874bda79b74bd62d95e5f7862647b921f71091c1524d849732cf3e90924a.
Sep 9 05:28:35.923351 containerd[1591]: time="2025-09-09T05:28:35.923288070Z" level=info msg="StartContainer for \"b8be874bda79b74bd62d95e5f7862647b921f71091c1524d849732cf3e90924a\" returns successfully"
Sep 9 05:28:35.951443 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Sep 9 05:28:35.951789 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Sep 9 05:28:35.952116 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables...
Sep 9 05:28:35.954087 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Sep 9 05:28:35.956348 systemd[1]: cri-containerd-b8be874bda79b74bd62d95e5f7862647b921f71091c1524d849732cf3e90924a.scope: Deactivated successfully.
Sep 9 05:28:35.956540 containerd[1591]: time="2025-09-09T05:28:35.956482894Z" level=info msg="TaskExit event in podsandbox handler container_id:\"b8be874bda79b74bd62d95e5f7862647b921f71091c1524d849732cf3e90924a\" id:\"b8be874bda79b74bd62d95e5f7862647b921f71091c1524d849732cf3e90924a\" pid:3229 exited_at:{seconds:1757395715 nanos:956109622}"
Sep 9 05:28:35.957016 containerd[1591]: time="2025-09-09T05:28:35.956972014Z" level=info msg="received exit event container_id:\"b8be874bda79b74bd62d95e5f7862647b921f71091c1524d849732cf3e90924a\" id:\"b8be874bda79b74bd62d95e5f7862647b921f71091c1524d849732cf3e90924a\" pid:3229 exited_at:{seconds:1757395715 nanos:956109622}"
Sep 9 05:28:36.018386 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Sep 9 05:28:36.437598 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b8be874bda79b74bd62d95e5f7862647b921f71091c1524d849732cf3e90924a-rootfs.mount: Deactivated successfully.
Sep 9 05:28:37.411431 containerd[1591]: time="2025-09-09T05:28:37.411361571Z" level=info msg="CreateContainer within sandbox \"537ea36fc350ec35003b1a75493ced1ad34badf1f0d2e4a4786ba9702641d597\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Sep 9 05:28:37.933678 containerd[1591]: time="2025-09-09T05:28:37.933605037Z" level=info msg="Container dae42a85566a2396f6e91fd2b5d20dd2ce8c63a2cced8413fdb67c821d5891f9: CDI devices from CRI Config.CDIDevices: []"
Sep 9 05:28:37.940618 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount402580575.mount: Deactivated successfully.
Sep 9 05:28:38.446699 containerd[1591]: time="2025-09-09T05:28:38.446651926Z" level=info msg="CreateContainer within sandbox \"537ea36fc350ec35003b1a75493ced1ad34badf1f0d2e4a4786ba9702641d597\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"dae42a85566a2396f6e91fd2b5d20dd2ce8c63a2cced8413fdb67c821d5891f9\""
Sep 9 05:28:38.447823 containerd[1591]: time="2025-09-09T05:28:38.447801867Z" level=info msg="StartContainer for \"dae42a85566a2396f6e91fd2b5d20dd2ce8c63a2cced8413fdb67c821d5891f9\""
Sep 9 05:28:38.449507 containerd[1591]: time="2025-09-09T05:28:38.449483567Z" level=info msg="connecting to shim dae42a85566a2396f6e91fd2b5d20dd2ce8c63a2cced8413fdb67c821d5891f9" address="unix:///run/containerd/s/a409c6b00ff1e998593d944c107dc6c86fa8cc5033c2142085a177ca0557be6c" protocol=ttrpc version=3
Sep 9 05:28:38.484266 systemd[1]: Started cri-containerd-dae42a85566a2396f6e91fd2b5d20dd2ce8c63a2cced8413fdb67c821d5891f9.scope - libcontainer container dae42a85566a2396f6e91fd2b5d20dd2ce8c63a2cced8413fdb67c821d5891f9.
Sep 9 05:28:38.545727 systemd[1]: cri-containerd-dae42a85566a2396f6e91fd2b5d20dd2ce8c63a2cced8413fdb67c821d5891f9.scope: Deactivated successfully.
Sep 9 05:28:38.547777 containerd[1591]: time="2025-09-09T05:28:38.547718050Z" level=info msg="TaskExit event in podsandbox handler container_id:\"dae42a85566a2396f6e91fd2b5d20dd2ce8c63a2cced8413fdb67c821d5891f9\" id:\"dae42a85566a2396f6e91fd2b5d20dd2ce8c63a2cced8413fdb67c821d5891f9\" pid:3288 exited_at:{seconds:1757395718 nanos:547213322}"
Sep 9 05:28:38.589329 containerd[1591]: time="2025-09-09T05:28:38.589229068Z" level=info msg="received exit event container_id:\"dae42a85566a2396f6e91fd2b5d20dd2ce8c63a2cced8413fdb67c821d5891f9\" id:\"dae42a85566a2396f6e91fd2b5d20dd2ce8c63a2cced8413fdb67c821d5891f9\" pid:3288 exited_at:{seconds:1757395718 nanos:547213322}"
Sep 9 05:28:38.591378 containerd[1591]: time="2025-09-09T05:28:38.591321639Z" level=info msg="StartContainer for \"dae42a85566a2396f6e91fd2b5d20dd2ce8c63a2cced8413fdb67c821d5891f9\" returns successfully"
Sep 9 05:28:38.616832 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-dae42a85566a2396f6e91fd2b5d20dd2ce8c63a2cced8413fdb67c821d5891f9-rootfs.mount: Deactivated successfully.
Sep 9 05:28:39.103839 containerd[1591]: time="2025-09-09T05:28:39.103732861Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 9 05:28:39.104983 containerd[1591]: time="2025-09-09T05:28:39.104941090Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=18904197"
Sep 9 05:28:39.107953 containerd[1591]: time="2025-09-09T05:28:39.106973699Z" level=info msg="ImageCreate event name:\"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 9 05:28:39.108222 containerd[1591]: time="2025-09-09T05:28:39.108185875Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"18897442\" in 6.30239905s"
Sep 9 05:28:39.108276 containerd[1591]: time="2025-09-09T05:28:39.108222114Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\""
Sep 9 05:28:39.110989 containerd[1591]: time="2025-09-09T05:28:39.110945660Z" level=info msg="CreateContainer within sandbox \"07a3588b75bf41a21edc8f12994be2acae626f7f44190ac386233e8ea5f62dcf\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}"
Sep 9 05:28:39.119390 containerd[1591]: time="2025-09-09T05:28:39.119330361Z" level=info msg="Container 20aef171c98580fe43ae17919eb253ab25bcd7deb75a6661330e3f7678f68813: CDI devices from CRI Config.CDIDevices: []"
Sep 9 05:28:39.129138 containerd[1591]: time="2025-09-09T05:28:39.129062793Z" level=info msg="CreateContainer within sandbox \"07a3588b75bf41a21edc8f12994be2acae626f7f44190ac386233e8ea5f62dcf\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"20aef171c98580fe43ae17919eb253ab25bcd7deb75a6661330e3f7678f68813\""
Sep 9 05:28:39.129788 containerd[1591]: time="2025-09-09T05:28:39.129731389Z" level=info msg="StartContainer for \"20aef171c98580fe43ae17919eb253ab25bcd7deb75a6661330e3f7678f68813\""
Sep 9 05:28:39.130653 containerd[1591]: time="2025-09-09T05:28:39.130627173Z" level=info msg="connecting to shim 20aef171c98580fe43ae17919eb253ab25bcd7deb75a6661330e3f7678f68813" address="unix:///run/containerd/s/b0b80323895bda1fb958e025db922245a5483a79db0227b6fd692db50d1c9c56" protocol=ttrpc version=3
Sep 9 05:28:39.161202 systemd[1]: Started cri-containerd-20aef171c98580fe43ae17919eb253ab25bcd7deb75a6661330e3f7678f68813.scope - libcontainer container 20aef171c98580fe43ae17919eb253ab25bcd7deb75a6661330e3f7678f68813.
Sep 9 05:28:39.216017 containerd[1591]: time="2025-09-09T05:28:39.215962436Z" level=info msg="StartContainer for \"20aef171c98580fe43ae17919eb253ab25bcd7deb75a6661330e3f7678f68813\" returns successfully"
Sep 9 05:28:39.428449 containerd[1591]: time="2025-09-09T05:28:39.428280220Z" level=info msg="CreateContainer within sandbox \"537ea36fc350ec35003b1a75493ced1ad34badf1f0d2e4a4786ba9702641d597\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Sep 9 05:28:39.453584 containerd[1591]: time="2025-09-09T05:28:39.452275776Z" level=info msg="Container 61d1e699bbee16cd0b330a10df6479cbb3236b66fa1ec63c539adc5dea3b6567: CDI devices from CRI Config.CDIDevices: []"
Sep 9 05:28:39.468563 containerd[1591]: time="2025-09-09T05:28:39.468459827Z" level=info msg="CreateContainer within sandbox \"537ea36fc350ec35003b1a75493ced1ad34badf1f0d2e4a4786ba9702641d597\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"61d1e699bbee16cd0b330a10df6479cbb3236b66fa1ec63c539adc5dea3b6567\""
Sep 9 05:28:39.470808 containerd[1591]: time="2025-09-09T05:28:39.470754939Z" level=info msg="StartContainer for \"61d1e699bbee16cd0b330a10df6479cbb3236b66fa1ec63c539adc5dea3b6567\""
Sep 9 05:28:39.475051 containerd[1591]: time="2025-09-09T05:28:39.474742610Z" level=info msg="connecting to shim 61d1e699bbee16cd0b330a10df6479cbb3236b66fa1ec63c539adc5dea3b6567" address="unix:///run/containerd/s/a409c6b00ff1e998593d944c107dc6c86fa8cc5033c2142085a177ca0557be6c" protocol=ttrpc version=3
Sep 9 05:28:39.476217 kubelet[2762]: I0909 05:28:39.475540 2762 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-5d85765b45-5dkj6" podStartSLOduration=1.6606190889999999 podStartE2EDuration="19.475511294s" podCreationTimestamp="2025-09-09 05:28:20 +0000 UTC" firstStartedPulling="2025-09-09 05:28:21.294294591 +0000 UTC m=+7.135737351" lastFinishedPulling="2025-09-09 05:28:39.109186796 +0000 UTC m=+24.950629556" observedRunningTime="2025-09-09 05:28:39.443466888 +0000 UTC m=+25.284909668" watchObservedRunningTime="2025-09-09 05:28:39.475511294 +0000 UTC m=+25.316954054"
Sep 9 05:28:39.505602 systemd[1]: Started cri-containerd-61d1e699bbee16cd0b330a10df6479cbb3236b66fa1ec63c539adc5dea3b6567.scope - libcontainer container 61d1e699bbee16cd0b330a10df6479cbb3236b66fa1ec63c539adc5dea3b6567.
Sep 9 05:28:39.563221 systemd[1]: cri-containerd-61d1e699bbee16cd0b330a10df6479cbb3236b66fa1ec63c539adc5dea3b6567.scope: Deactivated successfully.
Sep 9 05:28:39.568535 containerd[1591]: time="2025-09-09T05:28:39.568461053Z" level=info msg="TaskExit event in podsandbox handler container_id:\"61d1e699bbee16cd0b330a10df6479cbb3236b66fa1ec63c539adc5dea3b6567\" id:\"61d1e699bbee16cd0b330a10df6479cbb3236b66fa1ec63c539adc5dea3b6567\" pid:3365 exited_at:{seconds:1757395719 nanos:566655892}"
Sep 9 05:28:39.570390 containerd[1591]: time="2025-09-09T05:28:39.567152885Z" level=warning msg="error from *cgroupsv2.Manager.EventChan" error="failed to add inotify watch for \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0f22bafb_3aa7_4389_9029_67b34ad5fcd5.slice/cri-containerd-61d1e699bbee16cd0b330a10df6479cbb3236b66fa1ec63c539adc5dea3b6567.scope/memory.events\": no such file or directory"
Sep 9 05:28:39.577957 containerd[1591]: time="2025-09-09T05:28:39.575718285Z" level=info msg="received exit event container_id:\"61d1e699bbee16cd0b330a10df6479cbb3236b66fa1ec63c539adc5dea3b6567\" id:\"61d1e699bbee16cd0b330a10df6479cbb3236b66fa1ec63c539adc5dea3b6567\" pid:3365 exited_at:{seconds:1757395719 nanos:566655892}"
Sep 9 05:28:39.588955 containerd[1591]: time="2025-09-09T05:28:39.588689893Z" level=info msg="StartContainer for \"61d1e699bbee16cd0b330a10df6479cbb3236b66fa1ec63c539adc5dea3b6567\" returns successfully"
Sep 9 05:28:40.434744 containerd[1591]: time="2025-09-09T05:28:40.434677874Z" level=info msg="CreateContainer within sandbox \"537ea36fc350ec35003b1a75493ced1ad34badf1f0d2e4a4786ba9702641d597\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Sep 9 05:28:40.506955 containerd[1591]: time="2025-09-09T05:28:40.506775619Z" level=info msg="Container 89584a8f3865c17ab309f6ddfff246469cf81e0ee9b2f1dbb14da7d049042bc4: CDI devices from CRI Config.CDIDevices: []"
Sep 9 05:28:40.510650 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3545572716.mount: Deactivated successfully.
Sep 9 05:28:40.522955 containerd[1591]: time="2025-09-09T05:28:40.522883575Z" level=info msg="CreateContainer within sandbox \"537ea36fc350ec35003b1a75493ced1ad34badf1f0d2e4a4786ba9702641d597\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"89584a8f3865c17ab309f6ddfff246469cf81e0ee9b2f1dbb14da7d049042bc4\""
Sep 9 05:28:40.523548 containerd[1591]: time="2025-09-09T05:28:40.523509010Z" level=info msg="StartContainer for \"89584a8f3865c17ab309f6ddfff246469cf81e0ee9b2f1dbb14da7d049042bc4\""
Sep 9 05:28:40.524859 containerd[1591]: time="2025-09-09T05:28:40.524722649Z" level=info msg="connecting to shim 89584a8f3865c17ab309f6ddfff246469cf81e0ee9b2f1dbb14da7d049042bc4" address="unix:///run/containerd/s/a409c6b00ff1e998593d944c107dc6c86fa8cc5033c2142085a177ca0557be6c" protocol=ttrpc version=3
Sep 9 05:28:40.551303 systemd[1]: Started cri-containerd-89584a8f3865c17ab309f6ddfff246469cf81e0ee9b2f1dbb14da7d049042bc4.scope - libcontainer container 89584a8f3865c17ab309f6ddfff246469cf81e0ee9b2f1dbb14da7d049042bc4.
Sep 9 05:28:40.610162 containerd[1591]: time="2025-09-09T05:28:40.610040977Z" level=info msg="StartContainer for \"89584a8f3865c17ab309f6ddfff246469cf81e0ee9b2f1dbb14da7d049042bc4\" returns successfully"
Sep 9 05:28:40.721265 containerd[1591]: time="2025-09-09T05:28:40.721194461Z" level=info msg="TaskExit event in podsandbox handler container_id:\"89584a8f3865c17ab309f6ddfff246469cf81e0ee9b2f1dbb14da7d049042bc4\" id:\"db83027a28df7733a2b4cec4f72ee751835ebfc1b19b9366aff05262800d34c6\" pid:3435 exited_at:{seconds:1757395720 nanos:719734429}"
Sep 9 05:28:40.785739 kubelet[2762]: I0909 05:28:40.785661 2762 kubelet_node_status.go:488] "Fast updating node status as it just became ready"
Sep 9 05:28:40.987088 systemd[1]: Created slice kubepods-burstable-pod051d70ea_982a_4002_ba23_847c645f8966.slice - libcontainer container kubepods-burstable-pod051d70ea_982a_4002_ba23_847c645f8966.slice.
Sep 9 05:28:40.996022 systemd[1]: Created slice kubepods-burstable-podf8ee62e6_1cc1_488c_b7ee_7d97a53c5230.slice - libcontainer container kubepods-burstable-podf8ee62e6_1cc1_488c_b7ee_7d97a53c5230.slice.
Sep 9 05:28:41.157411 kubelet[2762]: I0909 05:28:41.157216 2762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5mgkm\" (UniqueName: \"kubernetes.io/projected/f8ee62e6-1cc1-488c-b7ee-7d97a53c5230-kube-api-access-5mgkm\") pod \"coredns-7c65d6cfc9-r8hzj\" (UID: \"f8ee62e6-1cc1-488c-b7ee-7d97a53c5230\") " pod="kube-system/coredns-7c65d6cfc9-r8hzj"
Sep 9 05:28:41.157411 kubelet[2762]: I0909 05:28:41.157279 2762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mw4qj\" (UniqueName: \"kubernetes.io/projected/051d70ea-982a-4002-ba23-847c645f8966-kube-api-access-mw4qj\") pod \"coredns-7c65d6cfc9-lns9k\" (UID: \"051d70ea-982a-4002-ba23-847c645f8966\") " pod="kube-system/coredns-7c65d6cfc9-lns9k"
Sep 9 05:28:41.157411 kubelet[2762]: I0909 05:28:41.157307 2762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f8ee62e6-1cc1-488c-b7ee-7d97a53c5230-config-volume\") pod \"coredns-7c65d6cfc9-r8hzj\" (UID: \"f8ee62e6-1cc1-488c-b7ee-7d97a53c5230\") " pod="kube-system/coredns-7c65d6cfc9-r8hzj"
Sep 9 05:28:41.157411 kubelet[2762]: I0909 05:28:41.157344 2762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/051d70ea-982a-4002-ba23-847c645f8966-config-volume\") pod \"coredns-7c65d6cfc9-lns9k\" (UID: \"051d70ea-982a-4002-ba23-847c645f8966\") " pod="kube-system/coredns-7c65d6cfc9-lns9k"
Sep 9 05:28:41.292513 containerd[1591]: time="2025-09-09T05:28:41.292373875Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-lns9k,Uid:051d70ea-982a-4002-ba23-847c645f8966,Namespace:kube-system,Attempt:0,}"
Sep 9 05:28:41.302684 containerd[1591]: time="2025-09-09T05:28:41.302601595Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-r8hzj,Uid:f8ee62e6-1cc1-488c-b7ee-7d97a53c5230,Namespace:kube-system,Attempt:0,}"
Sep 9 05:28:41.536625 kubelet[2762]: I0909 05:28:41.536551 2762 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-mqg6d" podStartSLOduration=9.871439744 podStartE2EDuration="21.536525045s" podCreationTimestamp="2025-09-09 05:28:20 +0000 UTC" firstStartedPulling="2025-09-09 05:28:21.14033204 +0000 UTC m=+6.981774800" lastFinishedPulling="2025-09-09 05:28:32.805417341 +0000 UTC m=+18.646860101" observedRunningTime="2025-09-09 05:28:41.534787942 +0000 UTC m=+27.376230722" watchObservedRunningTime="2025-09-09 05:28:41.536525045 +0000 UTC m=+27.377967805"
Sep 9 05:28:43.305198 systemd-networkd[1472]: cilium_host: Link UP
Sep 9 05:28:43.305395 systemd-networkd[1472]: cilium_net: Link UP
Sep 9 05:28:43.305592 systemd-networkd[1472]: cilium_net: Gained carrier
Sep 9 05:28:43.305760 systemd-networkd[1472]: cilium_host: Gained carrier
Sep 9 05:28:43.515896 systemd-networkd[1472]: cilium_vxlan: Link UP
Sep 9 05:28:43.515933 systemd-networkd[1472]: cilium_vxlan: Gained carrier
Sep 9 05:28:43.773967 kernel: NET: Registered PF_ALG protocol family
Sep 9 05:28:44.152321 systemd-networkd[1472]: cilium_host: Gained IPv6LL
Sep 9 05:28:44.216198 systemd-networkd[1472]: cilium_net: Gained IPv6LL
Sep 9 05:28:44.536179 systemd-networkd[1472]: cilium_vxlan: Gained IPv6LL
Sep 9 05:28:44.643673 systemd-networkd[1472]: lxc_health: Link UP
Sep 9 05:28:44.644068 systemd-networkd[1472]: lxc_health: Gained carrier
Sep 9 05:28:44.810967 kernel: eth0: renamed from tmp5f488
Sep 9 05:28:44.814353 systemd-networkd[1472]: lxcc13f2701383c: Link UP
Sep 9 05:28:44.816300 systemd-networkd[1472]: lxcc13f2701383c: Gained carrier
Sep 9 05:28:44.891963 kernel: eth0: renamed from tmp2ce02
Sep 9 05:28:44.892802 systemd-networkd[1472]: lxc5d8ccf701715: Link UP
Sep 9 05:28:44.893370 systemd-networkd[1472]: lxc5d8ccf701715: Gained carrier
Sep 9 05:28:45.880276 systemd-networkd[1472]: lxc_health: Gained IPv6LL
Sep 9 05:28:46.072329 systemd-networkd[1472]: lxcc13f2701383c: Gained IPv6LL
Sep 9 05:28:46.136893 systemd-networkd[1472]: lxc5d8ccf701715: Gained IPv6LL
Sep 9 05:28:46.288121 kubelet[2762]: I0909 05:28:46.288040 2762 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Sep 9 05:28:49.211642 systemd[1]: Started sshd@9-10.0.0.16:22-10.0.0.1:48824.service - OpenSSH per-connection server daemon (10.0.0.1:48824).
Sep 9 05:28:49.352054 sshd[3919]: Accepted publickey for core from 10.0.0.1 port 48824 ssh2: RSA SHA256:7F/y7C4gusWo4gyqUKS6/QkQLBnINJ/p9+95m14vjQE
Sep 9 05:28:49.354174 sshd-session[3919]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 9 05:28:49.369127 systemd-logind[1533]: New session 10 of user core.
Sep 9 05:28:49.374274 systemd[1]: Started session-10.scope - Session 10 of User core.
Sep 9 05:28:49.729808 sshd[3922]: Connection closed by 10.0.0.1 port 48824
Sep 9 05:28:49.730223 sshd-session[3919]: pam_unix(sshd:session): session closed for user core
Sep 9 05:28:49.735087 systemd[1]: sshd@9-10.0.0.16:22-10.0.0.1:48824.service: Deactivated successfully.
Sep 9 05:28:49.737252 systemd[1]: session-10.scope: Deactivated successfully.
Sep 9 05:28:49.738289 systemd-logind[1533]: Session 10 logged out. Waiting for processes to exit.
Sep 9 05:28:49.739596 systemd-logind[1533]: Removed session 10.
Sep 9 05:28:50.002736 containerd[1591]: time="2025-09-09T05:28:50.002279810Z" level=info msg="connecting to shim 5f48833ce43ddacbe4fcbc720462d33b2c52bc92e906c8e958ef494f3ea5e7e8" address="unix:///run/containerd/s/d24e2c74108428c3ac84f04c00cc868f47e6ecbe8b265e2aafbf2a41f22534cf" namespace=k8s.io protocol=ttrpc version=3
Sep 9 05:28:50.006399 containerd[1591]: time="2025-09-09T05:28:50.006286601Z" level=info msg="connecting to shim 2ce02c3b3d2700f0f9482501002f0a51a2e4ccf39034d8032f9ce852518011b7" address="unix:///run/containerd/s/aee393bd43fd63f0c976e142c12a9c477ada07ac3f7a083162b657bdb7c750e6" namespace=k8s.io protocol=ttrpc version=3
Sep 9 05:28:50.038160 systemd[1]: Started cri-containerd-2ce02c3b3d2700f0f9482501002f0a51a2e4ccf39034d8032f9ce852518011b7.scope - libcontainer container 2ce02c3b3d2700f0f9482501002f0a51a2e4ccf39034d8032f9ce852518011b7.
Sep 9 05:28:50.043654 systemd[1]: Started cri-containerd-5f48833ce43ddacbe4fcbc720462d33b2c52bc92e906c8e958ef494f3ea5e7e8.scope - libcontainer container 5f48833ce43ddacbe4fcbc720462d33b2c52bc92e906c8e958ef494f3ea5e7e8.
Sep 9 05:28:50.059412 systemd-resolved[1412]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
Sep 9 05:28:50.067159 systemd-resolved[1412]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
Sep 9 05:28:50.108092 containerd[1591]: time="2025-09-09T05:28:50.107984197Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-r8hzj,Uid:f8ee62e6-1cc1-488c-b7ee-7d97a53c5230,Namespace:kube-system,Attempt:0,} returns sandbox id \"2ce02c3b3d2700f0f9482501002f0a51a2e4ccf39034d8032f9ce852518011b7\""
Sep 9 05:28:50.113144 containerd[1591]: time="2025-09-09T05:28:50.113070274Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-lns9k,Uid:051d70ea-982a-4002-ba23-847c645f8966,Namespace:kube-system,Attempt:0,} returns sandbox id \"5f48833ce43ddacbe4fcbc720462d33b2c52bc92e906c8e958ef494f3ea5e7e8\""
Sep 9 05:28:50.114185 containerd[1591]: time="2025-09-09T05:28:50.114037700Z" level=info msg="CreateContainer within sandbox \"2ce02c3b3d2700f0f9482501002f0a51a2e4ccf39034d8032f9ce852518011b7\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Sep 9 05:28:50.116478 containerd[1591]: time="2025-09-09T05:28:50.116365930Z" level=info msg="CreateContainer within sandbox \"5f48833ce43ddacbe4fcbc720462d33b2c52bc92e906c8e958ef494f3ea5e7e8\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Sep 9 05:28:50.131755 containerd[1591]: time="2025-09-09T05:28:50.131540113Z" level=info msg="Container 421169b38fa5ec8a40f0414043adb67e091d26f398bfefae608376f3eb070df6: CDI devices from CRI Config.CDIDevices: []"
Sep 9 05:28:50.145577 containerd[1591]: time="2025-09-09T05:28:50.145504336Z" level=info msg="CreateContainer within sandbox \"2ce02c3b3d2700f0f9482501002f0a51a2e4ccf39034d8032f9ce852518011b7\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"421169b38fa5ec8a40f0414043adb67e091d26f398bfefae608376f3eb070df6\""
Sep 9 05:28:50.146267 containerd[1591]: time="2025-09-09T05:28:50.146216263Z" level=info msg="StartContainer for \"421169b38fa5ec8a40f0414043adb67e091d26f398bfefae608376f3eb070df6\""
Sep 9 05:28:50.153946 containerd[1591]: time="2025-09-09T05:28:50.153721861Z" level=info msg="Container adb810ccb7708aa2b1093a3bbaff065f0dbbcb2350e3fe679f59f6063dda0fc9: CDI devices from CRI Config.CDIDevices: []"
Sep 9 05:28:50.155495 containerd[1591]: time="2025-09-09T05:28:50.155414718Z" level=info msg="connecting to shim 421169b38fa5ec8a40f0414043adb67e091d26f398bfefae608376f3eb070df6" address="unix:///run/containerd/s/aee393bd43fd63f0c976e142c12a9c477ada07ac3f7a083162b657bdb7c750e6" protocol=ttrpc version=3
Sep 9 05:28:50.171053 containerd[1591]: time="2025-09-09T05:28:50.171004653Z" level=info msg="CreateContainer within sandbox \"5f48833ce43ddacbe4fcbc720462d33b2c52bc92e906c8e958ef494f3ea5e7e8\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"adb810ccb7708aa2b1093a3bbaff065f0dbbcb2350e3fe679f59f6063dda0fc9\""
Sep 9 05:28:50.172134 containerd[1591]: time="2025-09-09T05:28:50.172091682Z" level=info msg="StartContainer for \"adb810ccb7708aa2b1093a3bbaff065f0dbbcb2350e3fe679f59f6063dda0fc9\""
Sep 9 05:28:50.173006 containerd[1591]: time="2025-09-09T05:28:50.172969271Z" level=info msg="connecting to shim adb810ccb7708aa2b1093a3bbaff065f0dbbcb2350e3fe679f59f6063dda0fc9" address="unix:///run/containerd/s/d24e2c74108428c3ac84f04c00cc868f47e6ecbe8b265e2aafbf2a41f22534cf" protocol=ttrpc version=3
Sep 9 05:28:50.187328 systemd[1]: Started cri-containerd-421169b38fa5ec8a40f0414043adb67e091d26f398bfefae608376f3eb070df6.scope - libcontainer container 421169b38fa5ec8a40f0414043adb67e091d26f398bfefae608376f3eb070df6.
Sep 9 05:28:50.191258 systemd[1]: Started cri-containerd-adb810ccb7708aa2b1093a3bbaff065f0dbbcb2350e3fe679f59f6063dda0fc9.scope - libcontainer container adb810ccb7708aa2b1093a3bbaff065f0dbbcb2350e3fe679f59f6063dda0fc9.
Sep 9 05:28:50.344058 containerd[1591]: time="2025-09-09T05:28:50.343666255Z" level=info msg="StartContainer for \"adb810ccb7708aa2b1093a3bbaff065f0dbbcb2350e3fe679f59f6063dda0fc9\" returns successfully"
Sep 9 05:28:50.344978 containerd[1591]: time="2025-09-09T05:28:50.344939335Z" level=info msg="StartContainer for \"421169b38fa5ec8a40f0414043adb67e091d26f398bfefae608376f3eb070df6\" returns successfully"
Sep 9 05:28:50.675668 kubelet[2762]: I0909 05:28:50.675360 2762 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-lns9k" podStartSLOduration=30.675339526 podStartE2EDuration="30.675339526s" podCreationTimestamp="2025-09-09 05:28:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-09 05:28:50.674063801 +0000 UTC m=+36.515506571" watchObservedRunningTime="2025-09-09 05:28:50.675339526 +0000 UTC m=+36.516782286"
Sep 9 05:28:50.882874 kubelet[2762]: I0909 05:28:50.882696 2762 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-r8hzj" podStartSLOduration=30.882671437 podStartE2EDuration="30.882671437s" podCreationTimestamp="2025-09-09 05:28:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-09 05:28:50.881792678 +0000 UTC m=+36.723235438" watchObservedRunningTime="2025-09-09 05:28:50.882671437 +0000 UTC m=+36.724114197"
Sep 9 05:28:54.745422 systemd[1]: Started sshd@10-10.0.0.16:22-10.0.0.1:47528.service - OpenSSH per-connection server daemon (10.0.0.1:47528).
Sep 9 05:28:54.820760 sshd[4102]: Accepted publickey for core from 10.0.0.1 port 47528 ssh2: RSA SHA256:7F/y7C4gusWo4gyqUKS6/QkQLBnINJ/p9+95m14vjQE
Sep 9 05:28:54.823086 sshd-session[4102]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 9 05:28:54.828895 systemd-logind[1533]: New session 11 of user core.
Sep 9 05:28:54.840205 systemd[1]: Started session-11.scope - Session 11 of User core.
Sep 9 05:28:55.038482 sshd[4105]: Connection closed by 10.0.0.1 port 47528
Sep 9 05:28:55.038682 sshd-session[4102]: pam_unix(sshd:session): session closed for user core
Sep 9 05:28:55.043632 systemd[1]: sshd@10-10.0.0.16:22-10.0.0.1:47528.service: Deactivated successfully.
Sep 9 05:28:55.046327 systemd[1]: session-11.scope: Deactivated successfully.
Sep 9 05:28:55.047233 systemd-logind[1533]: Session 11 logged out. Waiting for processes to exit.
Sep 9 05:28:55.048862 systemd-logind[1533]: Removed session 11.
Sep 9 05:29:00.056809 systemd[1]: Started sshd@11-10.0.0.16:22-10.0.0.1:52598.service - OpenSSH per-connection server daemon (10.0.0.1:52598).
Sep 9 05:29:00.115212 sshd[4119]: Accepted publickey for core from 10.0.0.1 port 52598 ssh2: RSA SHA256:7F/y7C4gusWo4gyqUKS6/QkQLBnINJ/p9+95m14vjQE
Sep 9 05:29:00.116685 sshd-session[4119]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 9 05:29:00.121789 systemd-logind[1533]: New session 12 of user core.
Sep 9 05:29:00.133119 systemd[1]: Started session-12.scope - Session 12 of User core.
Sep 9 05:29:00.304187 sshd[4122]: Connection closed by 10.0.0.1 port 52598
Sep 9 05:29:00.304638 sshd-session[4119]: pam_unix(sshd:session): session closed for user core
Sep 9 05:29:00.310357 systemd[1]: sshd@11-10.0.0.16:22-10.0.0.1:52598.service: Deactivated successfully.
Sep 9 05:29:00.313203 systemd[1]: session-12.scope: Deactivated successfully.
Sep 9 05:29:00.314196 systemd-logind[1533]: Session 12 logged out. Waiting for processes to exit.
Sep 9 05:29:00.316489 systemd-logind[1533]: Removed session 12.
Sep 9 05:29:05.320650 systemd[1]: Started sshd@12-10.0.0.16:22-10.0.0.1:52620.service - OpenSSH per-connection server daemon (10.0.0.1:52620).
Sep 9 05:29:05.380812 sshd[4145]: Accepted publickey for core from 10.0.0.1 port 52620 ssh2: RSA SHA256:7F/y7C4gusWo4gyqUKS6/QkQLBnINJ/p9+95m14vjQE
Sep 9 05:29:05.382803 sshd-session[4145]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 9 05:29:05.389488 systemd-logind[1533]: New session 13 of user core.
Sep 9 05:29:05.399103 systemd[1]: Started session-13.scope - Session 13 of User core.
Sep 9 05:29:05.517889 sshd[4148]: Connection closed by 10.0.0.1 port 52620
Sep 9 05:29:05.518284 sshd-session[4145]: pam_unix(sshd:session): session closed for user core
Sep 9 05:29:05.522795 systemd[1]: sshd@12-10.0.0.16:22-10.0.0.1:52620.service: Deactivated successfully.
Sep 9 05:29:05.525413 systemd[1]: session-13.scope: Deactivated successfully.
Sep 9 05:29:05.526406 systemd-logind[1533]: Session 13 logged out. Waiting for processes to exit.
Sep 9 05:29:05.528214 systemd-logind[1533]: Removed session 13.
Sep 9 05:29:10.533051 systemd[1]: Started sshd@13-10.0.0.16:22-10.0.0.1:35698.service - OpenSSH per-connection server daemon (10.0.0.1:35698).
Sep 9 05:29:10.586053 sshd[4163]: Accepted publickey for core from 10.0.0.1 port 35698 ssh2: RSA SHA256:7F/y7C4gusWo4gyqUKS6/QkQLBnINJ/p9+95m14vjQE
Sep 9 05:29:10.587393 sshd-session[4163]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 9 05:29:10.592130 systemd-logind[1533]: New session 14 of user core.
Sep 9 05:29:10.607133 systemd[1]: Started session-14.scope - Session 14 of User core.
Sep 9 05:29:10.727560 sshd[4166]: Connection closed by 10.0.0.1 port 35698
Sep 9 05:29:10.728023 sshd-session[4163]: pam_unix(sshd:session): session closed for user core
Sep 9 05:29:10.747403 systemd[1]: sshd@13-10.0.0.16:22-10.0.0.1:35698.service: Deactivated successfully.
Sep 9 05:29:10.749695 systemd[1]: session-14.scope: Deactivated successfully.
Sep 9 05:29:10.750666 systemd-logind[1533]: Session 14 logged out. Waiting for processes to exit.
Sep 9 05:29:10.753545 systemd[1]: Started sshd@14-10.0.0.16:22-10.0.0.1:35710.service - OpenSSH per-connection server daemon (10.0.0.1:35710).
Sep 9 05:29:10.754353 systemd-logind[1533]: Removed session 14.
Sep 9 05:29:10.811466 sshd[4181]: Accepted publickey for core from 10.0.0.1 port 35710 ssh2: RSA SHA256:7F/y7C4gusWo4gyqUKS6/QkQLBnINJ/p9+95m14vjQE
Sep 9 05:29:10.813195 sshd-session[4181]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 9 05:29:10.817803 systemd-logind[1533]: New session 15 of user core.
Sep 9 05:29:10.828077 systemd[1]: Started session-15.scope - Session 15 of User core.
Sep 9 05:29:10.991407 sshd[4184]: Connection closed by 10.0.0.1 port 35710
Sep 9 05:29:10.994324 sshd-session[4181]: pam_unix(sshd:session): session closed for user core
Sep 9 05:29:11.005500 systemd[1]: sshd@14-10.0.0.16:22-10.0.0.1:35710.service: Deactivated successfully.
Sep 9 05:29:11.008033 systemd[1]: session-15.scope: Deactivated successfully.
Sep 9 05:29:11.012047 systemd-logind[1533]: Session 15 logged out. Waiting for processes to exit.
Sep 9 05:29:11.017586 systemd[1]: Started sshd@15-10.0.0.16:22-10.0.0.1:35712.service - OpenSSH per-connection server daemon (10.0.0.1:35712).
Sep 9 05:29:11.018616 systemd-logind[1533]: Removed session 15.
Sep 9 05:29:11.086636 sshd[4195]: Accepted publickey for core from 10.0.0.1 port 35712 ssh2: RSA SHA256:7F/y7C4gusWo4gyqUKS6/QkQLBnINJ/p9+95m14vjQE
Sep 9 05:29:11.088992 sshd-session[4195]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 9 05:29:11.095312 systemd-logind[1533]: New session 16 of user core.
Sep 9 05:29:11.109286 systemd[1]: Started session-16.scope - Session 16 of User core.
Sep 9 05:29:11.230443 sshd[4198]: Connection closed by 10.0.0.1 port 35712 Sep 9 05:29:11.230879 sshd-session[4195]: pam_unix(sshd:session): session closed for user core Sep 9 05:29:11.236528 systemd[1]: sshd@15-10.0.0.16:22-10.0.0.1:35712.service: Deactivated successfully. Sep 9 05:29:11.239017 systemd[1]: session-16.scope: Deactivated successfully. Sep 9 05:29:11.239945 systemd-logind[1533]: Session 16 logged out. Waiting for processes to exit. Sep 9 05:29:11.241652 systemd-logind[1533]: Removed session 16. Sep 9 05:29:16.247602 systemd[1]: Started sshd@16-10.0.0.16:22-10.0.0.1:35728.service - OpenSSH per-connection server daemon (10.0.0.1:35728). Sep 9 05:29:16.302890 sshd[4213]: Accepted publickey for core from 10.0.0.1 port 35728 ssh2: RSA SHA256:7F/y7C4gusWo4gyqUKS6/QkQLBnINJ/p9+95m14vjQE Sep 9 05:29:16.304732 sshd-session[4213]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 05:29:16.309666 systemd-logind[1533]: New session 17 of user core. Sep 9 05:29:16.319060 systemd[1]: Started session-17.scope - Session 17 of User core. Sep 9 05:29:16.435356 sshd[4216]: Connection closed by 10.0.0.1 port 35728 Sep 9 05:29:16.435774 sshd-session[4213]: pam_unix(sshd:session): session closed for user core Sep 9 05:29:16.440799 systemd[1]: sshd@16-10.0.0.16:22-10.0.0.1:35728.service: Deactivated successfully. Sep 9 05:29:16.443505 systemd[1]: session-17.scope: Deactivated successfully. Sep 9 05:29:16.444480 systemd-logind[1533]: Session 17 logged out. Waiting for processes to exit. Sep 9 05:29:16.445887 systemd-logind[1533]: Removed session 17. Sep 9 05:29:21.450030 systemd[1]: Started sshd@17-10.0.0.16:22-10.0.0.1:53508.service - OpenSSH per-connection server daemon (10.0.0.1:53508). 
Sep 9 05:29:21.518588 sshd[4231]: Accepted publickey for core from 10.0.0.1 port 53508 ssh2: RSA SHA256:7F/y7C4gusWo4gyqUKS6/QkQLBnINJ/p9+95m14vjQE Sep 9 05:29:21.521018 sshd-session[4231]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 05:29:21.526996 systemd-logind[1533]: New session 18 of user core. Sep 9 05:29:21.538228 systemd[1]: Started session-18.scope - Session 18 of User core. Sep 9 05:29:21.655022 sshd[4234]: Connection closed by 10.0.0.1 port 53508 Sep 9 05:29:21.659150 sshd-session[4231]: pam_unix(sshd:session): session closed for user core Sep 9 05:29:21.664502 systemd[1]: sshd@17-10.0.0.16:22-10.0.0.1:53508.service: Deactivated successfully. Sep 9 05:29:21.667461 systemd[1]: session-18.scope: Deactivated successfully. Sep 9 05:29:21.668545 systemd-logind[1533]: Session 18 logged out. Waiting for processes to exit. Sep 9 05:29:21.670673 systemd-logind[1533]: Removed session 18. Sep 9 05:29:26.674002 systemd[1]: Started sshd@18-10.0.0.16:22-10.0.0.1:53528.service - OpenSSH per-connection server daemon (10.0.0.1:53528). Sep 9 05:29:26.759673 sshd[4248]: Accepted publickey for core from 10.0.0.1 port 53528 ssh2: RSA SHA256:7F/y7C4gusWo4gyqUKS6/QkQLBnINJ/p9+95m14vjQE Sep 9 05:29:26.761901 sshd-session[4248]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 05:29:26.768274 systemd-logind[1533]: New session 19 of user core. Sep 9 05:29:26.776312 systemd[1]: Started session-19.scope - Session 19 of User core. Sep 9 05:29:26.912676 sshd[4251]: Connection closed by 10.0.0.1 port 53528 Sep 9 05:29:26.913144 sshd-session[4248]: pam_unix(sshd:session): session closed for user core Sep 9 05:29:26.923175 systemd[1]: sshd@18-10.0.0.16:22-10.0.0.1:53528.service: Deactivated successfully. Sep 9 05:29:26.925532 systemd[1]: session-19.scope: Deactivated successfully. Sep 9 05:29:26.927010 systemd-logind[1533]: Session 19 logged out. Waiting for processes to exit. 
Sep 9 05:29:26.929705 systemd-logind[1533]: Removed session 19. Sep 9 05:29:26.931589 systemd[1]: Started sshd@19-10.0.0.16:22-10.0.0.1:53532.service - OpenSSH per-connection server daemon (10.0.0.1:53532). Sep 9 05:29:26.995696 sshd[4265]: Accepted publickey for core from 10.0.0.1 port 53532 ssh2: RSA SHA256:7F/y7C4gusWo4gyqUKS6/QkQLBnINJ/p9+95m14vjQE Sep 9 05:29:26.998113 sshd-session[4265]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 05:29:27.004086 systemd-logind[1533]: New session 20 of user core. Sep 9 05:29:27.014082 systemd[1]: Started session-20.scope - Session 20 of User core. Sep 9 05:29:27.395359 sshd[4268]: Connection closed by 10.0.0.1 port 53532 Sep 9 05:29:27.396487 sshd-session[4265]: pam_unix(sshd:session): session closed for user core Sep 9 05:29:27.415880 systemd[1]: sshd@19-10.0.0.16:22-10.0.0.1:53532.service: Deactivated successfully. Sep 9 05:29:27.418879 systemd[1]: session-20.scope: Deactivated successfully. Sep 9 05:29:27.419996 systemd-logind[1533]: Session 20 logged out. Waiting for processes to exit. Sep 9 05:29:27.423976 systemd[1]: Started sshd@20-10.0.0.16:22-10.0.0.1:53540.service - OpenSSH per-connection server daemon (10.0.0.1:53540). Sep 9 05:29:27.424958 systemd-logind[1533]: Removed session 20. Sep 9 05:29:27.491467 sshd[4280]: Accepted publickey for core from 10.0.0.1 port 53540 ssh2: RSA SHA256:7F/y7C4gusWo4gyqUKS6/QkQLBnINJ/p9+95m14vjQE Sep 9 05:29:27.493352 sshd-session[4280]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 05:29:27.499191 systemd-logind[1533]: New session 21 of user core. Sep 9 05:29:27.510139 systemd[1]: Started session-21.scope - Session 21 of User core. Sep 9 05:29:29.296568 sshd[4283]: Connection closed by 10.0.0.1 port 53540 Sep 9 05:29:29.297173 sshd-session[4280]: pam_unix(sshd:session): session closed for user core Sep 9 05:29:29.308860 systemd[1]: sshd@20-10.0.0.16:22-10.0.0.1:53540.service: Deactivated successfully. 
Sep 9 05:29:29.311712 systemd[1]: session-21.scope: Deactivated successfully. Sep 9 05:29:29.312844 systemd-logind[1533]: Session 21 logged out. Waiting for processes to exit. Sep 9 05:29:29.317278 systemd[1]: Started sshd@21-10.0.0.16:22-10.0.0.1:53568.service - OpenSSH per-connection server daemon (10.0.0.1:53568). Sep 9 05:29:29.318526 systemd-logind[1533]: Removed session 21. Sep 9 05:29:29.372757 sshd[4309]: Accepted publickey for core from 10.0.0.1 port 53568 ssh2: RSA SHA256:7F/y7C4gusWo4gyqUKS6/QkQLBnINJ/p9+95m14vjQE Sep 9 05:29:29.375004 sshd-session[4309]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 05:29:29.380496 systemd-logind[1533]: New session 22 of user core. Sep 9 05:29:29.398352 systemd[1]: Started session-22.scope - Session 22 of User core. Sep 9 05:29:30.116872 sshd[4312]: Connection closed by 10.0.0.1 port 53568 Sep 9 05:29:30.117404 sshd-session[4309]: pam_unix(sshd:session): session closed for user core Sep 9 05:29:30.128288 systemd[1]: sshd@21-10.0.0.16:22-10.0.0.1:53568.service: Deactivated successfully. Sep 9 05:29:30.130678 systemd[1]: session-22.scope: Deactivated successfully. Sep 9 05:29:30.131723 systemd-logind[1533]: Session 22 logged out. Waiting for processes to exit. Sep 9 05:29:30.135547 systemd[1]: Started sshd@22-10.0.0.16:22-10.0.0.1:59002.service - OpenSSH per-connection server daemon (10.0.0.1:59002). Sep 9 05:29:30.136481 systemd-logind[1533]: Removed session 22. Sep 9 05:29:30.202037 sshd[4324]: Accepted publickey for core from 10.0.0.1 port 59002 ssh2: RSA SHA256:7F/y7C4gusWo4gyqUKS6/QkQLBnINJ/p9+95m14vjQE Sep 9 05:29:30.203793 sshd-session[4324]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 05:29:30.209546 systemd-logind[1533]: New session 23 of user core. Sep 9 05:29:30.216150 systemd[1]: Started session-23.scope - Session 23 of User core. 
Sep 9 05:29:30.343844 sshd[4327]: Connection closed by 10.0.0.1 port 59002 Sep 9 05:29:30.344309 sshd-session[4324]: pam_unix(sshd:session): session closed for user core Sep 9 05:29:30.349225 systemd[1]: sshd@22-10.0.0.16:22-10.0.0.1:59002.service: Deactivated successfully. Sep 9 05:29:30.352285 systemd[1]: session-23.scope: Deactivated successfully. Sep 9 05:29:30.353326 systemd-logind[1533]: Session 23 logged out. Waiting for processes to exit. Sep 9 05:29:30.354785 systemd-logind[1533]: Removed session 23. Sep 9 05:29:35.365019 systemd[1]: Started sshd@23-10.0.0.16:22-10.0.0.1:59016.service - OpenSSH per-connection server daemon (10.0.0.1:59016). Sep 9 05:29:35.420673 sshd[4341]: Accepted publickey for core from 10.0.0.1 port 59016 ssh2: RSA SHA256:7F/y7C4gusWo4gyqUKS6/QkQLBnINJ/p9+95m14vjQE Sep 9 05:29:35.422704 sshd-session[4341]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 05:29:35.427624 systemd-logind[1533]: New session 24 of user core. Sep 9 05:29:35.438121 systemd[1]: Started session-24.scope - Session 24 of User core. Sep 9 05:29:35.550646 sshd[4344]: Connection closed by 10.0.0.1 port 59016 Sep 9 05:29:35.551054 sshd-session[4341]: pam_unix(sshd:session): session closed for user core Sep 9 05:29:35.556686 systemd[1]: sshd@23-10.0.0.16:22-10.0.0.1:59016.service: Deactivated successfully. Sep 9 05:29:35.559323 systemd[1]: session-24.scope: Deactivated successfully. Sep 9 05:29:35.560501 systemd-logind[1533]: Session 24 logged out. Waiting for processes to exit. Sep 9 05:29:35.561874 systemd-logind[1533]: Removed session 24. Sep 9 05:29:40.562395 systemd[1]: Started sshd@24-10.0.0.16:22-10.0.0.1:58072.service - OpenSSH per-connection server daemon (10.0.0.1:58072). 
Sep 9 05:29:40.620904 sshd[4361]: Accepted publickey for core from 10.0.0.1 port 58072 ssh2: RSA SHA256:7F/y7C4gusWo4gyqUKS6/QkQLBnINJ/p9+95m14vjQE Sep 9 05:29:40.622594 sshd-session[4361]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 05:29:40.628595 systemd-logind[1533]: New session 25 of user core. Sep 9 05:29:40.638235 systemd[1]: Started session-25.scope - Session 25 of User core. Sep 9 05:29:40.805012 sshd[4364]: Connection closed by 10.0.0.1 port 58072 Sep 9 05:29:40.805534 sshd-session[4361]: pam_unix(sshd:session): session closed for user core Sep 9 05:29:40.811292 systemd[1]: sshd@24-10.0.0.16:22-10.0.0.1:58072.service: Deactivated successfully. Sep 9 05:29:40.814313 systemd[1]: session-25.scope: Deactivated successfully. Sep 9 05:29:40.815442 systemd-logind[1533]: Session 25 logged out. Waiting for processes to exit. Sep 9 05:29:40.816843 systemd-logind[1533]: Removed session 25. Sep 9 05:29:45.825959 systemd[1]: Started sshd@25-10.0.0.16:22-10.0.0.1:58096.service - OpenSSH per-connection server daemon (10.0.0.1:58096). Sep 9 05:29:45.897810 sshd[4378]: Accepted publickey for core from 10.0.0.1 port 58096 ssh2: RSA SHA256:7F/y7C4gusWo4gyqUKS6/QkQLBnINJ/p9+95m14vjQE Sep 9 05:29:45.899707 sshd-session[4378]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 05:29:45.905819 systemd-logind[1533]: New session 26 of user core. Sep 9 05:29:45.916352 systemd[1]: Started session-26.scope - Session 26 of User core. Sep 9 05:29:46.034212 sshd[4381]: Connection closed by 10.0.0.1 port 58096 Sep 9 05:29:46.034663 sshd-session[4378]: pam_unix(sshd:session): session closed for user core Sep 9 05:29:46.039552 systemd[1]: sshd@25-10.0.0.16:22-10.0.0.1:58096.service: Deactivated successfully. Sep 9 05:29:46.042149 systemd[1]: session-26.scope: Deactivated successfully. Sep 9 05:29:46.043046 systemd-logind[1533]: Session 26 logged out. Waiting for processes to exit. 
Sep 9 05:29:46.044303 systemd-logind[1533]: Removed session 26. Sep 9 05:29:51.054431 systemd[1]: Started sshd@26-10.0.0.16:22-10.0.0.1:42010.service - OpenSSH per-connection server daemon (10.0.0.1:42010). Sep 9 05:29:51.125674 sshd[4394]: Accepted publickey for core from 10.0.0.1 port 42010 ssh2: RSA SHA256:7F/y7C4gusWo4gyqUKS6/QkQLBnINJ/p9+95m14vjQE Sep 9 05:29:51.127751 sshd-session[4394]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 05:29:51.133715 systemd-logind[1533]: New session 27 of user core. Sep 9 05:29:51.144193 systemd[1]: Started session-27.scope - Session 27 of User core. Sep 9 05:29:51.280698 sshd[4397]: Connection closed by 10.0.0.1 port 42010 Sep 9 05:29:51.281190 sshd-session[4394]: pam_unix(sshd:session): session closed for user core Sep 9 05:29:51.292142 systemd[1]: sshd@26-10.0.0.16:22-10.0.0.1:42010.service: Deactivated successfully. Sep 9 05:29:51.295347 systemd[1]: session-27.scope: Deactivated successfully. Sep 9 05:29:51.298088 systemd-logind[1533]: Session 27 logged out. Waiting for processes to exit. Sep 9 05:29:51.302422 systemd[1]: Started sshd@27-10.0.0.16:22-10.0.0.1:42024.service - OpenSSH per-connection server daemon (10.0.0.1:42024). Sep 9 05:29:51.303329 systemd-logind[1533]: Removed session 27. Sep 9 05:29:51.370585 sshd[4410]: Accepted publickey for core from 10.0.0.1 port 42024 ssh2: RSA SHA256:7F/y7C4gusWo4gyqUKS6/QkQLBnINJ/p9+95m14vjQE Sep 9 05:29:51.372417 sshd-session[4410]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 05:29:51.380035 systemd-logind[1533]: New session 28 of user core. Sep 9 05:29:51.390266 systemd[1]: Started session-28.scope - Session 28 of User core. 
Sep 9 05:29:53.930005 containerd[1591]: time="2025-09-09T05:29:53.929884067Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Sep 9 05:29:53.942118 containerd[1591]: time="2025-09-09T05:29:53.942047009Z" level=info msg="StopContainer for \"20aef171c98580fe43ae17919eb253ab25bcd7deb75a6661330e3f7678f68813\" with timeout 30 (s)" Sep 9 05:29:53.944586 containerd[1591]: time="2025-09-09T05:29:53.944540296Z" level=info msg="TaskExit event in podsandbox handler container_id:\"89584a8f3865c17ab309f6ddfff246469cf81e0ee9b2f1dbb14da7d049042bc4\" id:\"711133f71436dbd7dad19723f37aa93bbdfe04b6c7bd862445566f7fa3dd0a08\" pid:4436 exited_at:{seconds:1757395793 nanos:944152753}" Sep 9 05:29:53.946669 containerd[1591]: time="2025-09-09T05:29:53.946636382Z" level=info msg="StopContainer for \"89584a8f3865c17ab309f6ddfff246469cf81e0ee9b2f1dbb14da7d049042bc4\" with timeout 2 (s)" Sep 9 05:29:53.947007 containerd[1591]: time="2025-09-09T05:29:53.946953251Z" level=info msg="Stop container \"89584a8f3865c17ab309f6ddfff246469cf81e0ee9b2f1dbb14da7d049042bc4\" with signal terminated" Sep 9 05:29:53.950521 containerd[1591]: time="2025-09-09T05:29:53.950487719Z" level=info msg="Stop container \"20aef171c98580fe43ae17919eb253ab25bcd7deb75a6661330e3f7678f68813\" with signal terminated" Sep 9 05:29:53.956161 systemd-networkd[1472]: lxc_health: Link DOWN Sep 9 05:29:53.956172 systemd-networkd[1472]: lxc_health: Lost carrier Sep 9 05:29:53.970348 systemd[1]: cri-containerd-20aef171c98580fe43ae17919eb253ab25bcd7deb75a6661330e3f7678f68813.scope: Deactivated successfully. 
Sep 9 05:29:53.972278 containerd[1591]: time="2025-09-09T05:29:53.972230386Z" level=info msg="received exit event container_id:\"20aef171c98580fe43ae17919eb253ab25bcd7deb75a6661330e3f7678f68813\" id:\"20aef171c98580fe43ae17919eb253ab25bcd7deb75a6661330e3f7678f68813\" pid:3331 exited_at:{seconds:1757395793 nanos:971752041}" Sep 9 05:29:53.972410 containerd[1591]: time="2025-09-09T05:29:53.972339893Z" level=info msg="TaskExit event in podsandbox handler container_id:\"20aef171c98580fe43ae17919eb253ab25bcd7deb75a6661330e3f7678f68813\" id:\"20aef171c98580fe43ae17919eb253ab25bcd7deb75a6661330e3f7678f68813\" pid:3331 exited_at:{seconds:1757395793 nanos:971752041}" Sep 9 05:29:53.980093 systemd[1]: cri-containerd-89584a8f3865c17ab309f6ddfff246469cf81e0ee9b2f1dbb14da7d049042bc4.scope: Deactivated successfully. Sep 9 05:29:53.980494 systemd[1]: cri-containerd-89584a8f3865c17ab309f6ddfff246469cf81e0ee9b2f1dbb14da7d049042bc4.scope: Consumed 7.917s CPU time, 127.7M memory peak, 284K read from disk, 13.3M written to disk. Sep 9 05:29:53.980831 containerd[1591]: time="2025-09-09T05:29:53.980797514Z" level=info msg="TaskExit event in podsandbox handler container_id:\"89584a8f3865c17ab309f6ddfff246469cf81e0ee9b2f1dbb14da7d049042bc4\" id:\"89584a8f3865c17ab309f6ddfff246469cf81e0ee9b2f1dbb14da7d049042bc4\" pid:3403 exited_at:{seconds:1757395793 nanos:980236383}" Sep 9 05:29:53.981117 containerd[1591]: time="2025-09-09T05:29:53.980932591Z" level=info msg="received exit event container_id:\"89584a8f3865c17ab309f6ddfff246469cf81e0ee9b2f1dbb14da7d049042bc4\" id:\"89584a8f3865c17ab309f6ddfff246469cf81e0ee9b2f1dbb14da7d049042bc4\" pid:3403 exited_at:{seconds:1757395793 nanos:980236383}" Sep 9 05:29:54.000235 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-20aef171c98580fe43ae17919eb253ab25bcd7deb75a6661330e3f7678f68813-rootfs.mount: Deactivated successfully. 
Sep 9 05:29:54.009091 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-89584a8f3865c17ab309f6ddfff246469cf81e0ee9b2f1dbb14da7d049042bc4-rootfs.mount: Deactivated successfully. Sep 9 05:29:54.437512 kubelet[2762]: E0909 05:29:54.437429 2762 kubelet.go:2902] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Sep 9 05:29:54.498290 containerd[1591]: time="2025-09-09T05:29:54.498208846Z" level=info msg="StopContainer for \"20aef171c98580fe43ae17919eb253ab25bcd7deb75a6661330e3f7678f68813\" returns successfully" Sep 9 05:29:54.501360 containerd[1591]: time="2025-09-09T05:29:54.501301297Z" level=info msg="StopPodSandbox for \"07a3588b75bf41a21edc8f12994be2acae626f7f44190ac386233e8ea5f62dcf\"" Sep 9 05:29:54.516641 containerd[1591]: time="2025-09-09T05:29:54.516578777Z" level=info msg="StopContainer for \"89584a8f3865c17ab309f6ddfff246469cf81e0ee9b2f1dbb14da7d049042bc4\" returns successfully" Sep 9 05:29:54.517221 containerd[1591]: time="2025-09-09T05:29:54.517179073Z" level=info msg="StopPodSandbox for \"537ea36fc350ec35003b1a75493ced1ad34badf1f0d2e4a4786ba9702641d597\"" Sep 9 05:29:54.517927 containerd[1591]: time="2025-09-09T05:29:54.517871352Z" level=info msg="Container to stop \"61d1e699bbee16cd0b330a10df6479cbb3236b66fa1ec63c539adc5dea3b6567\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 9 05:29:54.517927 containerd[1591]: time="2025-09-09T05:29:54.517901459Z" level=info msg="Container to stop \"1f651df15433ec026ff859bb3e1cc783db99b4654484bb3acc1c0d520fc7f123\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 9 05:29:54.518044 containerd[1591]: time="2025-09-09T05:29:54.517940312Z" level=info msg="Container to stop \"b8be874bda79b74bd62d95e5f7862647b921f71091c1524d849732cf3e90924a\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 9 05:29:54.518044 
containerd[1591]: time="2025-09-09T05:29:54.517964027Z" level=info msg="Container to stop \"89584a8f3865c17ab309f6ddfff246469cf81e0ee9b2f1dbb14da7d049042bc4\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 9 05:29:54.518044 containerd[1591]: time="2025-09-09T05:29:54.517976501Z" level=info msg="Container to stop \"dae42a85566a2396f6e91fd2b5d20dd2ce8c63a2cced8413fdb67c821d5891f9\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 9 05:29:54.520647 containerd[1591]: time="2025-09-09T05:29:54.520584454Z" level=info msg="Container to stop \"20aef171c98580fe43ae17919eb253ab25bcd7deb75a6661330e3f7678f68813\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 9 05:29:54.525506 systemd[1]: cri-containerd-537ea36fc350ec35003b1a75493ced1ad34badf1f0d2e4a4786ba9702641d597.scope: Deactivated successfully. Sep 9 05:29:54.527848 systemd[1]: cri-containerd-07a3588b75bf41a21edc8f12994be2acae626f7f44190ac386233e8ea5f62dcf.scope: Deactivated successfully. Sep 9 05:29:54.529166 containerd[1591]: time="2025-09-09T05:29:54.529130461Z" level=info msg="TaskExit event in podsandbox handler container_id:\"537ea36fc350ec35003b1a75493ced1ad34badf1f0d2e4a4786ba9702641d597\" id:\"537ea36fc350ec35003b1a75493ced1ad34badf1f0d2e4a4786ba9702641d597\" pid:2913 exit_status:137 exited_at:{seconds:1757395794 nanos:528643449}" Sep 9 05:29:54.557775 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-537ea36fc350ec35003b1a75493ced1ad34badf1f0d2e4a4786ba9702641d597-rootfs.mount: Deactivated successfully. Sep 9 05:29:54.560423 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-07a3588b75bf41a21edc8f12994be2acae626f7f44190ac386233e8ea5f62dcf-rootfs.mount: Deactivated successfully. 
Sep 9 05:29:54.860658 containerd[1591]: time="2025-09-09T05:29:54.860536851Z" level=info msg="shim disconnected" id=537ea36fc350ec35003b1a75493ced1ad34badf1f0d2e4a4786ba9702641d597 namespace=k8s.io Sep 9 05:29:54.860658 containerd[1591]: time="2025-09-09T05:29:54.860578300Z" level=warning msg="cleaning up after shim disconnected" id=537ea36fc350ec35003b1a75493ced1ad34badf1f0d2e4a4786ba9702641d597 namespace=k8s.io Sep 9 05:29:54.866637 sshd[4414]: Connection closed by 10.0.0.1 port 42024 Sep 9 05:29:54.867980 sshd-session[4410]: pam_unix(sshd:session): session closed for user core Sep 9 05:29:54.876374 systemd[1]: sshd@27-10.0.0.16:22-10.0.0.1:42024.service: Deactivated successfully. Sep 9 05:29:54.879428 systemd[1]: session-28.scope: Deactivated successfully. Sep 9 05:29:54.881060 systemd-logind[1533]: Session 28 logged out. Waiting for processes to exit. Sep 9 05:29:54.885881 systemd[1]: Started sshd@28-10.0.0.16:22-10.0.0.1:42030.service - OpenSSH per-connection server daemon (10.0.0.1:42030). Sep 9 05:29:54.888059 systemd-logind[1533]: Removed session 28. 
Sep 9 05:29:54.888249 containerd[1591]: time="2025-09-09T05:29:54.860586556Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 9 05:29:54.889940 containerd[1591]: time="2025-09-09T05:29:54.860616773Z" level=info msg="shim disconnected" id=07a3588b75bf41a21edc8f12994be2acae626f7f44190ac386233e8ea5f62dcf namespace=k8s.io Sep 9 05:29:54.889940 containerd[1591]: time="2025-09-09T05:29:54.888547249Z" level=warning msg="cleaning up after shim disconnected" id=07a3588b75bf41a21edc8f12994be2acae626f7f44190ac386233e8ea5f62dcf namespace=k8s.io Sep 9 05:29:54.889940 containerd[1591]: time="2025-09-09T05:29:54.888559071Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 9 05:29:54.919016 containerd[1591]: time="2025-09-09T05:29:54.918670322Z" level=info msg="TaskExit event in podsandbox handler container_id:\"07a3588b75bf41a21edc8f12994be2acae626f7f44190ac386233e8ea5f62dcf\" id:\"07a3588b75bf41a21edc8f12994be2acae626f7f44190ac386233e8ea5f62dcf\" pid:2984 exit_status:137 exited_at:{seconds:1757395794 nanos:533456745}" Sep 9 05:29:54.920969 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-07a3588b75bf41a21edc8f12994be2acae626f7f44190ac386233e8ea5f62dcf-shm.mount: Deactivated successfully. Sep 9 05:29:54.921111 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-537ea36fc350ec35003b1a75493ced1ad34badf1f0d2e4a4786ba9702641d597-shm.mount: Deactivated successfully. 
Sep 9 05:29:54.921195 containerd[1591]: time="2025-09-09T05:29:54.921125437Z" level=info msg="TearDown network for sandbox \"537ea36fc350ec35003b1a75493ced1ad34badf1f0d2e4a4786ba9702641d597\" successfully" Sep 9 05:29:54.921195 containerd[1591]: time="2025-09-09T05:29:54.921163739Z" level=info msg="StopPodSandbox for \"537ea36fc350ec35003b1a75493ced1ad34badf1f0d2e4a4786ba9702641d597\" returns successfully" Sep 9 05:29:54.933454 containerd[1591]: time="2025-09-09T05:29:54.933382644Z" level=info msg="received exit event sandbox_id:\"07a3588b75bf41a21edc8f12994be2acae626f7f44190ac386233e8ea5f62dcf\" exit_status:137 exited_at:{seconds:1757395794 nanos:533456745}" Sep 9 05:29:54.934104 containerd[1591]: time="2025-09-09T05:29:54.933898399Z" level=info msg="received exit event sandbox_id:\"537ea36fc350ec35003b1a75493ced1ad34badf1f0d2e4a4786ba9702641d597\" exit_status:137 exited_at:{seconds:1757395794 nanos:528643449}" Sep 9 05:29:54.936132 containerd[1591]: time="2025-09-09T05:29:54.935999434Z" level=info msg="TearDown network for sandbox \"07a3588b75bf41a21edc8f12994be2acae626f7f44190ac386233e8ea5f62dcf\" successfully" Sep 9 05:29:54.936132 containerd[1591]: time="2025-09-09T05:29:54.936040643Z" level=info msg="StopPodSandbox for \"07a3588b75bf41a21edc8f12994be2acae626f7f44190ac386233e8ea5f62dcf\" returns successfully" Sep 9 05:29:54.942262 sshd[4537]: Accepted publickey for core from 10.0.0.1 port 42030 ssh2: RSA SHA256:7F/y7C4gusWo4gyqUKS6/QkQLBnINJ/p9+95m14vjQE Sep 9 05:29:54.944636 sshd-session[4537]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 05:29:54.950254 systemd-logind[1533]: New session 29 of user core. Sep 9 05:29:54.982252 systemd[1]: Started session-29.scope - Session 29 of User core. 
Sep 9 05:29:55.051764 kubelet[2762]: I0909 05:29:55.051697 2762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/0f22bafb-3aa7-4389-9029-67b34ad5fcd5-hubble-tls\") pod \"0f22bafb-3aa7-4389-9029-67b34ad5fcd5\" (UID: \"0f22bafb-3aa7-4389-9029-67b34ad5fcd5\") " Sep 9 05:29:55.051764 kubelet[2762]: I0909 05:29:55.051741 2762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/0f22bafb-3aa7-4389-9029-67b34ad5fcd5-host-proc-sys-kernel\") pod \"0f22bafb-3aa7-4389-9029-67b34ad5fcd5\" (UID: \"0f22bafb-3aa7-4389-9029-67b34ad5fcd5\") " Sep 9 05:29:55.051764 kubelet[2762]: I0909 05:29:55.051761 2762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pfv8q\" (UniqueName: \"kubernetes.io/projected/0f22bafb-3aa7-4389-9029-67b34ad5fcd5-kube-api-access-pfv8q\") pod \"0f22bafb-3aa7-4389-9029-67b34ad5fcd5\" (UID: \"0f22bafb-3aa7-4389-9029-67b34ad5fcd5\") " Sep 9 05:29:55.051764 kubelet[2762]: I0909 05:29:55.051775 2762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/0f22bafb-3aa7-4389-9029-67b34ad5fcd5-hostproc\") pod \"0f22bafb-3aa7-4389-9029-67b34ad5fcd5\" (UID: \"0f22bafb-3aa7-4389-9029-67b34ad5fcd5\") " Sep 9 05:29:55.051764 kubelet[2762]: I0909 05:29:55.051791 2762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/caa17d5a-131a-435e-b65a-a2953a95fa45-cilium-config-path\") pod \"caa17d5a-131a-435e-b65a-a2953a95fa45\" (UID: \"caa17d5a-131a-435e-b65a-a2953a95fa45\") " Sep 9 05:29:55.052198 kubelet[2762]: I0909 05:29:55.051808 2762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: 
\"kubernetes.io/host-path/0f22bafb-3aa7-4389-9029-67b34ad5fcd5-xtables-lock\") pod \"0f22bafb-3aa7-4389-9029-67b34ad5fcd5\" (UID: \"0f22bafb-3aa7-4389-9029-67b34ad5fcd5\") " Sep 9 05:29:55.052198 kubelet[2762]: I0909 05:29:55.051821 2762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/0f22bafb-3aa7-4389-9029-67b34ad5fcd5-cilium-cgroup\") pod \"0f22bafb-3aa7-4389-9029-67b34ad5fcd5\" (UID: \"0f22bafb-3aa7-4389-9029-67b34ad5fcd5\") " Sep 9 05:29:55.052198 kubelet[2762]: I0909 05:29:55.051834 2762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/0f22bafb-3aa7-4389-9029-67b34ad5fcd5-bpf-maps\") pod \"0f22bafb-3aa7-4389-9029-67b34ad5fcd5\" (UID: \"0f22bafb-3aa7-4389-9029-67b34ad5fcd5\") " Sep 9 05:29:55.052198 kubelet[2762]: I0909 05:29:55.051852 2762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rxllv\" (UniqueName: \"kubernetes.io/projected/caa17d5a-131a-435e-b65a-a2953a95fa45-kube-api-access-rxllv\") pod \"caa17d5a-131a-435e-b65a-a2953a95fa45\" (UID: \"caa17d5a-131a-435e-b65a-a2953a95fa45\") " Sep 9 05:29:55.052198 kubelet[2762]: I0909 05:29:55.051868 2762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/0f22bafb-3aa7-4389-9029-67b34ad5fcd5-cilium-config-path\") pod \"0f22bafb-3aa7-4389-9029-67b34ad5fcd5\" (UID: \"0f22bafb-3aa7-4389-9029-67b34ad5fcd5\") " Sep 9 05:29:55.052198 kubelet[2762]: I0909 05:29:55.051882 2762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/0f22bafb-3aa7-4389-9029-67b34ad5fcd5-host-proc-sys-net\") pod \"0f22bafb-3aa7-4389-9029-67b34ad5fcd5\" (UID: \"0f22bafb-3aa7-4389-9029-67b34ad5fcd5\") " Sep 9 05:29:55.052426 kubelet[2762]: I0909 05:29:55.051894 
2762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/0f22bafb-3aa7-4389-9029-67b34ad5fcd5-etc-cni-netd\") pod \"0f22bafb-3aa7-4389-9029-67b34ad5fcd5\" (UID: \"0f22bafb-3aa7-4389-9029-67b34ad5fcd5\") " Sep 9 05:29:55.052426 kubelet[2762]: I0909 05:29:55.051908 2762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/0f22bafb-3aa7-4389-9029-67b34ad5fcd5-lib-modules\") pod \"0f22bafb-3aa7-4389-9029-67b34ad5fcd5\" (UID: \"0f22bafb-3aa7-4389-9029-67b34ad5fcd5\") " Sep 9 05:29:55.052426 kubelet[2762]: I0909 05:29:55.051967 2762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/0f22bafb-3aa7-4389-9029-67b34ad5fcd5-cilium-run\") pod \"0f22bafb-3aa7-4389-9029-67b34ad5fcd5\" (UID: \"0f22bafb-3aa7-4389-9029-67b34ad5fcd5\") " Sep 9 05:29:55.052426 kubelet[2762]: I0909 05:29:55.051981 2762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/0f22bafb-3aa7-4389-9029-67b34ad5fcd5-cni-path\") pod \"0f22bafb-3aa7-4389-9029-67b34ad5fcd5\" (UID: \"0f22bafb-3aa7-4389-9029-67b34ad5fcd5\") " Sep 9 05:29:55.052426 kubelet[2762]: I0909 05:29:55.051996 2762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/0f22bafb-3aa7-4389-9029-67b34ad5fcd5-clustermesh-secrets\") pod \"0f22bafb-3aa7-4389-9029-67b34ad5fcd5\" (UID: \"0f22bafb-3aa7-4389-9029-67b34ad5fcd5\") " Sep 9 05:29:55.052426 kubelet[2762]: I0909 05:29:55.052167 2762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0f22bafb-3aa7-4389-9029-67b34ad5fcd5-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "0f22bafb-3aa7-4389-9029-67b34ad5fcd5" (UID: 
"0f22bafb-3aa7-4389-9029-67b34ad5fcd5"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 9 05:29:55.055641 kubelet[2762]: I0909 05:29:55.055595 2762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0f22bafb-3aa7-4389-9029-67b34ad5fcd5-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "0f22bafb-3aa7-4389-9029-67b34ad5fcd5" (UID: "0f22bafb-3aa7-4389-9029-67b34ad5fcd5"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Sep 9 05:29:55.055641 kubelet[2762]: I0909 05:29:55.055635 2762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0f22bafb-3aa7-4389-9029-67b34ad5fcd5-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "0f22bafb-3aa7-4389-9029-67b34ad5fcd5" (UID: "0f22bafb-3aa7-4389-9029-67b34ad5fcd5"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 9 05:29:55.055738 kubelet[2762]: I0909 05:29:55.055652 2762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0f22bafb-3aa7-4389-9029-67b34ad5fcd5-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "0f22bafb-3aa7-4389-9029-67b34ad5fcd5" (UID: "0f22bafb-3aa7-4389-9029-67b34ad5fcd5"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 9 05:29:55.055738 kubelet[2762]: I0909 05:29:55.055666 2762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0f22bafb-3aa7-4389-9029-67b34ad5fcd5-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "0f22bafb-3aa7-4389-9029-67b34ad5fcd5" (UID: "0f22bafb-3aa7-4389-9029-67b34ad5fcd5"). InnerVolumeSpecName "lib-modules". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 9 05:29:55.055738 kubelet[2762]: I0909 05:29:55.055679 2762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0f22bafb-3aa7-4389-9029-67b34ad5fcd5-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "0f22bafb-3aa7-4389-9029-67b34ad5fcd5" (UID: "0f22bafb-3aa7-4389-9029-67b34ad5fcd5"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 9 05:29:55.055738 kubelet[2762]: I0909 05:29:55.055691 2762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0f22bafb-3aa7-4389-9029-67b34ad5fcd5-cni-path" (OuterVolumeSpecName: "cni-path") pod "0f22bafb-3aa7-4389-9029-67b34ad5fcd5" (UID: "0f22bafb-3aa7-4389-9029-67b34ad5fcd5"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 9 05:29:55.074941 kubelet[2762]: I0909 05:29:55.074486 2762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0f22bafb-3aa7-4389-9029-67b34ad5fcd5-hostproc" (OuterVolumeSpecName: "hostproc") pod "0f22bafb-3aa7-4389-9029-67b34ad5fcd5" (UID: "0f22bafb-3aa7-4389-9029-67b34ad5fcd5"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 9 05:29:55.077513 kubelet[2762]: I0909 05:29:55.077469 2762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0f22bafb-3aa7-4389-9029-67b34ad5fcd5-kube-api-access-pfv8q" (OuterVolumeSpecName: "kube-api-access-pfv8q") pod "0f22bafb-3aa7-4389-9029-67b34ad5fcd5" (UID: "0f22bafb-3aa7-4389-9029-67b34ad5fcd5"). InnerVolumeSpecName "kube-api-access-pfv8q". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 9 05:29:55.077889 kubelet[2762]: I0909 05:29:55.077851 2762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/caa17d5a-131a-435e-b65a-a2953a95fa45-kube-api-access-rxllv" (OuterVolumeSpecName: "kube-api-access-rxllv") pod "caa17d5a-131a-435e-b65a-a2953a95fa45" (UID: "caa17d5a-131a-435e-b65a-a2953a95fa45"). InnerVolumeSpecName "kube-api-access-rxllv". PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 9 05:29:55.078009 kubelet[2762]: I0909 05:29:55.077900 2762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0f22bafb-3aa7-4389-9029-67b34ad5fcd5-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "0f22bafb-3aa7-4389-9029-67b34ad5fcd5" (UID: "0f22bafb-3aa7-4389-9029-67b34ad5fcd5"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 9 05:29:55.078009 kubelet[2762]: I0909 05:29:55.077942 2762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0f22bafb-3aa7-4389-9029-67b34ad5fcd5-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "0f22bafb-3aa7-4389-9029-67b34ad5fcd5" (UID: "0f22bafb-3aa7-4389-9029-67b34ad5fcd5"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 9 05:29:55.078009 kubelet[2762]: I0909 05:29:55.077967 2762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0f22bafb-3aa7-4389-9029-67b34ad5fcd5-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "0f22bafb-3aa7-4389-9029-67b34ad5fcd5" (UID: "0f22bafb-3aa7-4389-9029-67b34ad5fcd5"). InnerVolumeSpecName "bpf-maps". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 9 05:29:55.078446 kubelet[2762]: I0909 05:29:55.078417 2762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0f22bafb-3aa7-4389-9029-67b34ad5fcd5-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "0f22bafb-3aa7-4389-9029-67b34ad5fcd5" (UID: "0f22bafb-3aa7-4389-9029-67b34ad5fcd5"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 9 05:29:55.079037 kubelet[2762]: I0909 05:29:55.078994 2762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/caa17d5a-131a-435e-b65a-a2953a95fa45-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "caa17d5a-131a-435e-b65a-a2953a95fa45" (UID: "caa17d5a-131a-435e-b65a-a2953a95fa45"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Sep 9 05:29:55.079172 kubelet[2762]: I0909 05:29:55.079030 2762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0f22bafb-3aa7-4389-9029-67b34ad5fcd5-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "0f22bafb-3aa7-4389-9029-67b34ad5fcd5" (UID: "0f22bafb-3aa7-4389-9029-67b34ad5fcd5"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 9 05:29:55.079524 systemd[1]: var-lib-kubelet-pods-0f22bafb\x2d3aa7\x2d4389\x2d9029\x2d67b34ad5fcd5-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dpfv8q.mount: Deactivated successfully. Sep 9 05:29:55.079641 systemd[1]: var-lib-kubelet-pods-0f22bafb\x2d3aa7\x2d4389\x2d9029\x2d67b34ad5fcd5-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Sep 9 05:29:55.082646 systemd[1]: var-lib-kubelet-pods-caa17d5a\x2d131a\x2d435e\x2db65a\x2da2953a95fa45-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2drxllv.mount: Deactivated successfully. 
Sep 9 05:29:55.082747 systemd[1]: var-lib-kubelet-pods-0f22bafb\x2d3aa7\x2d4389\x2d9029\x2d67b34ad5fcd5-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Sep 9 05:29:55.153042 kubelet[2762]: I0909 05:29:55.152855 2762 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pfv8q\" (UniqueName: \"kubernetes.io/projected/0f22bafb-3aa7-4389-9029-67b34ad5fcd5-kube-api-access-pfv8q\") on node \"localhost\" DevicePath \"\"" Sep 9 05:29:55.153042 kubelet[2762]: I0909 05:29:55.152904 2762 reconciler_common.go:293] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/0f22bafb-3aa7-4389-9029-67b34ad5fcd5-hostproc\") on node \"localhost\" DevicePath \"\"" Sep 9 05:29:55.153042 kubelet[2762]: I0909 05:29:55.152931 2762 reconciler_common.go:293] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/caa17d5a-131a-435e-b65a-a2953a95fa45-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Sep 9 05:29:55.153042 kubelet[2762]: I0909 05:29:55.152942 2762 reconciler_common.go:293] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/0f22bafb-3aa7-4389-9029-67b34ad5fcd5-xtables-lock\") on node \"localhost\" DevicePath \"\"" Sep 9 05:29:55.153042 kubelet[2762]: I0909 05:29:55.152960 2762 reconciler_common.go:293] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/0f22bafb-3aa7-4389-9029-67b34ad5fcd5-cilium-cgroup\") on node \"localhost\" DevicePath \"\"" Sep 9 05:29:55.153042 kubelet[2762]: I0909 05:29:55.152967 2762 reconciler_common.go:293] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/0f22bafb-3aa7-4389-9029-67b34ad5fcd5-bpf-maps\") on node \"localhost\" DevicePath \"\"" Sep 9 05:29:55.153042 kubelet[2762]: I0909 05:29:55.152983 2762 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rxllv\" (UniqueName: 
\"kubernetes.io/projected/caa17d5a-131a-435e-b65a-a2953a95fa45-kube-api-access-rxllv\") on node \"localhost\" DevicePath \"\"" Sep 9 05:29:55.153042 kubelet[2762]: I0909 05:29:55.152991 2762 reconciler_common.go:293] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/0f22bafb-3aa7-4389-9029-67b34ad5fcd5-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Sep 9 05:29:55.153362 kubelet[2762]: I0909 05:29:55.152999 2762 reconciler_common.go:293] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/0f22bafb-3aa7-4389-9029-67b34ad5fcd5-host-proc-sys-net\") on node \"localhost\" DevicePath \"\"" Sep 9 05:29:55.153362 kubelet[2762]: I0909 05:29:55.153007 2762 reconciler_common.go:293] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/0f22bafb-3aa7-4389-9029-67b34ad5fcd5-etc-cni-netd\") on node \"localhost\" DevicePath \"\"" Sep 9 05:29:55.153362 kubelet[2762]: I0909 05:29:55.153017 2762 reconciler_common.go:293] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/0f22bafb-3aa7-4389-9029-67b34ad5fcd5-lib-modules\") on node \"localhost\" DevicePath \"\"" Sep 9 05:29:55.153362 kubelet[2762]: I0909 05:29:55.153028 2762 reconciler_common.go:293] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/0f22bafb-3aa7-4389-9029-67b34ad5fcd5-cilium-run\") on node \"localhost\" DevicePath \"\"" Sep 9 05:29:55.153362 kubelet[2762]: I0909 05:29:55.153035 2762 reconciler_common.go:293] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/0f22bafb-3aa7-4389-9029-67b34ad5fcd5-cni-path\") on node \"localhost\" DevicePath \"\"" Sep 9 05:29:55.153362 kubelet[2762]: I0909 05:29:55.153043 2762 reconciler_common.go:293] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/0f22bafb-3aa7-4389-9029-67b34ad5fcd5-clustermesh-secrets\") on node \"localhost\" 
DevicePath \"\"" Sep 9 05:29:55.153362 kubelet[2762]: I0909 05:29:55.153053 2762 reconciler_common.go:293] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/0f22bafb-3aa7-4389-9029-67b34ad5fcd5-hubble-tls\") on node \"localhost\" DevicePath \"\"" Sep 9 05:29:55.153362 kubelet[2762]: I0909 05:29:55.153063 2762 reconciler_common.go:293] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/0f22bafb-3aa7-4389-9029-67b34ad5fcd5-host-proc-sys-kernel\") on node \"localhost\" DevicePath \"\"" Sep 9 05:29:55.625732 kubelet[2762]: I0909 05:29:55.624653 2762 scope.go:117] "RemoveContainer" containerID="20aef171c98580fe43ae17919eb253ab25bcd7deb75a6661330e3f7678f68813" Sep 9 05:29:55.627239 containerd[1591]: time="2025-09-09T05:29:55.627198975Z" level=info msg="RemoveContainer for \"20aef171c98580fe43ae17919eb253ab25bcd7deb75a6661330e3f7678f68813\"" Sep 9 05:29:55.632425 systemd[1]: Removed slice kubepods-besteffort-podcaa17d5a_131a_435e_b65a_a2953a95fa45.slice - libcontainer container kubepods-besteffort-podcaa17d5a_131a_435e_b65a_a2953a95fa45.slice. Sep 9 05:29:55.636998 systemd[1]: Removed slice kubepods-burstable-pod0f22bafb_3aa7_4389_9029_67b34ad5fcd5.slice - libcontainer container kubepods-burstable-pod0f22bafb_3aa7_4389_9029_67b34ad5fcd5.slice. Sep 9 05:29:55.637130 systemd[1]: kubepods-burstable-pod0f22bafb_3aa7_4389_9029_67b34ad5fcd5.slice: Consumed 8.054s CPU time, 128.1M memory peak, 296K read from disk, 13.3M written to disk. 
Sep 9 05:29:55.720115 containerd[1591]: time="2025-09-09T05:29:55.720029756Z" level=info msg="RemoveContainer for \"20aef171c98580fe43ae17919eb253ab25bcd7deb75a6661330e3f7678f68813\" returns successfully" Sep 9 05:29:55.720540 kubelet[2762]: I0909 05:29:55.720469 2762 scope.go:117] "RemoveContainer" containerID="89584a8f3865c17ab309f6ddfff246469cf81e0ee9b2f1dbb14da7d049042bc4" Sep 9 05:29:55.723976 containerd[1591]: time="2025-09-09T05:29:55.723893354Z" level=info msg="RemoveContainer for \"89584a8f3865c17ab309f6ddfff246469cf81e0ee9b2f1dbb14da7d049042bc4\"" Sep 9 05:29:55.754710 containerd[1591]: time="2025-09-09T05:29:55.754625060Z" level=info msg="RemoveContainer for \"89584a8f3865c17ab309f6ddfff246469cf81e0ee9b2f1dbb14da7d049042bc4\" returns successfully" Sep 9 05:29:55.755093 kubelet[2762]: I0909 05:29:55.755034 2762 scope.go:117] "RemoveContainer" containerID="61d1e699bbee16cd0b330a10df6479cbb3236b66fa1ec63c539adc5dea3b6567" Sep 9 05:29:55.756397 containerd[1591]: time="2025-09-09T05:29:55.756372095Z" level=info msg="RemoveContainer for \"61d1e699bbee16cd0b330a10df6479cbb3236b66fa1ec63c539adc5dea3b6567\"" Sep 9 05:29:55.783242 containerd[1591]: time="2025-09-09T05:29:55.783185880Z" level=info msg="RemoveContainer for \"61d1e699bbee16cd0b330a10df6479cbb3236b66fa1ec63c539adc5dea3b6567\" returns successfully" Sep 9 05:29:55.783573 kubelet[2762]: I0909 05:29:55.783520 2762 scope.go:117] "RemoveContainer" containerID="dae42a85566a2396f6e91fd2b5d20dd2ce8c63a2cced8413fdb67c821d5891f9" Sep 9 05:29:55.786448 containerd[1591]: time="2025-09-09T05:29:55.786400490Z" level=info msg="RemoveContainer for \"dae42a85566a2396f6e91fd2b5d20dd2ce8c63a2cced8413fdb67c821d5891f9\"" Sep 9 05:29:55.827370 containerd[1591]: time="2025-09-09T05:29:55.827301942Z" level=info msg="RemoveContainer for \"dae42a85566a2396f6e91fd2b5d20dd2ce8c63a2cced8413fdb67c821d5891f9\" returns successfully" Sep 9 05:29:55.827685 kubelet[2762]: I0909 05:29:55.827637 2762 scope.go:117] "RemoveContainer" 
containerID="b8be874bda79b74bd62d95e5f7862647b921f71091c1524d849732cf3e90924a" Sep 9 05:29:55.829426 containerd[1591]: time="2025-09-09T05:29:55.829391254Z" level=info msg="RemoveContainer for \"b8be874bda79b74bd62d95e5f7862647b921f71091c1524d849732cf3e90924a\"" Sep 9 05:29:55.877128 containerd[1591]: time="2025-09-09T05:29:55.876947081Z" level=info msg="RemoveContainer for \"b8be874bda79b74bd62d95e5f7862647b921f71091c1524d849732cf3e90924a\" returns successfully" Sep 9 05:29:55.877290 kubelet[2762]: I0909 05:29:55.877227 2762 scope.go:117] "RemoveContainer" containerID="1f651df15433ec026ff859bb3e1cc783db99b4654484bb3acc1c0d520fc7f123" Sep 9 05:29:55.880243 containerd[1591]: time="2025-09-09T05:29:55.879639133Z" level=info msg="RemoveContainer for \"1f651df15433ec026ff859bb3e1cc783db99b4654484bb3acc1c0d520fc7f123\"" Sep 9 05:29:55.932016 containerd[1591]: time="2025-09-09T05:29:55.931891043Z" level=info msg="RemoveContainer for \"1f651df15433ec026ff859bb3e1cc783db99b4654484bb3acc1c0d520fc7f123\" returns successfully" Sep 9 05:29:56.248154 sshd[4568]: Connection closed by 10.0.0.1 port 42030 Sep 9 05:29:56.248538 sshd-session[4537]: pam_unix(sshd:session): session closed for user core Sep 9 05:29:56.262576 systemd[1]: sshd@28-10.0.0.16:22-10.0.0.1:42030.service: Deactivated successfully. Sep 9 05:29:56.265236 systemd[1]: session-29.scope: Deactivated successfully. Sep 9 05:29:56.268352 systemd-logind[1533]: Session 29 logged out. Waiting for processes to exit. Sep 9 05:29:56.273172 systemd[1]: Started sshd@29-10.0.0.16:22-10.0.0.1:42032.service - OpenSSH per-connection server daemon (10.0.0.1:42032). Sep 9 05:29:56.274050 systemd-logind[1533]: Removed session 29. 
Sep 9 05:29:56.307407 kubelet[2762]: I0909 05:29:56.307342 2762 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0f22bafb-3aa7-4389-9029-67b34ad5fcd5" path="/var/lib/kubelet/pods/0f22bafb-3aa7-4389-9029-67b34ad5fcd5/volumes" Sep 9 05:29:56.308316 kubelet[2762]: I0909 05:29:56.308286 2762 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="caa17d5a-131a-435e-b65a-a2953a95fa45" path="/var/lib/kubelet/pods/caa17d5a-131a-435e-b65a-a2953a95fa45/volumes" Sep 9 05:29:56.338101 sshd[4584]: Accepted publickey for core from 10.0.0.1 port 42032 ssh2: RSA SHA256:7F/y7C4gusWo4gyqUKS6/QkQLBnINJ/p9+95m14vjQE Sep 9 05:29:56.339831 sshd-session[4584]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 05:29:56.345597 systemd-logind[1533]: New session 30 of user core. Sep 9 05:29:56.353107 systemd[1]: Started session-30.scope - Session 30 of User core. Sep 9 05:29:56.417810 sshd[4587]: Connection closed by 10.0.0.1 port 42032 Sep 9 05:29:56.418494 sshd-session[4584]: pam_unix(sshd:session): session closed for user core Sep 9 05:29:56.428849 kubelet[2762]: E0909 05:29:56.428768 2762 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="0f22bafb-3aa7-4389-9029-67b34ad5fcd5" containerName="mount-cgroup" Sep 9 05:29:56.429709 kubelet[2762]: E0909 05:29:56.429481 2762 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="0f22bafb-3aa7-4389-9029-67b34ad5fcd5" containerName="mount-bpf-fs" Sep 9 05:29:56.429709 kubelet[2762]: E0909 05:29:56.429503 2762 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="caa17d5a-131a-435e-b65a-a2953a95fa45" containerName="cilium-operator" Sep 9 05:29:56.429709 kubelet[2762]: E0909 05:29:56.429513 2762 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="0f22bafb-3aa7-4389-9029-67b34ad5fcd5" containerName="cilium-agent" Sep 9 05:29:56.429709 kubelet[2762]: E0909 05:29:56.429525 2762 cpu_manager.go:395] "RemoveStaleState: removing 
container" podUID="0f22bafb-3aa7-4389-9029-67b34ad5fcd5" containerName="apply-sysctl-overwrites" Sep 9 05:29:56.429709 kubelet[2762]: E0909 05:29:56.429555 2762 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="0f22bafb-3aa7-4389-9029-67b34ad5fcd5" containerName="clean-cilium-state" Sep 9 05:29:56.429709 kubelet[2762]: I0909 05:29:56.429633 2762 memory_manager.go:354] "RemoveStaleState removing state" podUID="0f22bafb-3aa7-4389-9029-67b34ad5fcd5" containerName="cilium-agent" Sep 9 05:29:56.429709 kubelet[2762]: I0909 05:29:56.429643 2762 memory_manager.go:354] "RemoveStaleState removing state" podUID="caa17d5a-131a-435e-b65a-a2953a95fa45" containerName="cilium-operator" Sep 9 05:29:56.433390 systemd[1]: sshd@29-10.0.0.16:22-10.0.0.1:42032.service: Deactivated successfully. Sep 9 05:29:56.438389 systemd[1]: session-30.scope: Deactivated successfully. Sep 9 05:29:56.441341 systemd-logind[1533]: Session 30 logged out. Waiting for processes to exit. Sep 9 05:29:56.445506 systemd[1]: Started sshd@30-10.0.0.16:22-10.0.0.1:42036.service - OpenSSH per-connection server daemon (10.0.0.1:42036). Sep 9 05:29:56.446613 systemd-logind[1533]: Removed session 30. Sep 9 05:29:56.461032 systemd[1]: Created slice kubepods-burstable-podbc72c973_7481_4539_bcff_b37df91c0d3f.slice - libcontainer container kubepods-burstable-podbc72c973_7481_4539_bcff_b37df91c0d3f.slice. 
Sep 9 05:29:56.462478 kubelet[2762]: I0909 05:29:56.462438 2762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/bc72c973-7481-4539-bcff-b37df91c0d3f-cni-path\") pod \"cilium-llqw4\" (UID: \"bc72c973-7481-4539-bcff-b37df91c0d3f\") " pod="kube-system/cilium-llqw4" Sep 9 05:29:56.462478 kubelet[2762]: I0909 05:29:56.462479 2762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/bc72c973-7481-4539-bcff-b37df91c0d3f-host-proc-sys-kernel\") pod \"cilium-llqw4\" (UID: \"bc72c973-7481-4539-bcff-b37df91c0d3f\") " pod="kube-system/cilium-llqw4" Sep 9 05:29:56.462478 kubelet[2762]: I0909 05:29:56.462504 2762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/bc72c973-7481-4539-bcff-b37df91c0d3f-bpf-maps\") pod \"cilium-llqw4\" (UID: \"bc72c973-7481-4539-bcff-b37df91c0d3f\") " pod="kube-system/cilium-llqw4" Sep 9 05:29:56.462478 kubelet[2762]: I0909 05:29:56.462526 2762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/bc72c973-7481-4539-bcff-b37df91c0d3f-cilium-ipsec-secrets\") pod \"cilium-llqw4\" (UID: \"bc72c973-7481-4539-bcff-b37df91c0d3f\") " pod="kube-system/cilium-llqw4" Sep 9 05:29:56.462478 kubelet[2762]: I0909 05:29:56.462547 2762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/bc72c973-7481-4539-bcff-b37df91c0d3f-host-proc-sys-net\") pod \"cilium-llqw4\" (UID: \"bc72c973-7481-4539-bcff-b37df91c0d3f\") " pod="kube-system/cilium-llqw4" Sep 9 05:29:56.462478 kubelet[2762]: I0909 05:29:56.462564 2762 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/bc72c973-7481-4539-bcff-b37df91c0d3f-cilium-run\") pod \"cilium-llqw4\" (UID: \"bc72c973-7481-4539-bcff-b37df91c0d3f\") " pod="kube-system/cilium-llqw4" Sep 9 05:29:56.463384 kubelet[2762]: I0909 05:29:56.462584 2762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/bc72c973-7481-4539-bcff-b37df91c0d3f-lib-modules\") pod \"cilium-llqw4\" (UID: \"bc72c973-7481-4539-bcff-b37df91c0d3f\") " pod="kube-system/cilium-llqw4" Sep 9 05:29:56.463384 kubelet[2762]: I0909 05:29:56.462603 2762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/bc72c973-7481-4539-bcff-b37df91c0d3f-hubble-tls\") pod \"cilium-llqw4\" (UID: \"bc72c973-7481-4539-bcff-b37df91c0d3f\") " pod="kube-system/cilium-llqw4" Sep 9 05:29:56.463384 kubelet[2762]: I0909 05:29:56.462622 2762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/bc72c973-7481-4539-bcff-b37df91c0d3f-clustermesh-secrets\") pod \"cilium-llqw4\" (UID: \"bc72c973-7481-4539-bcff-b37df91c0d3f\") " pod="kube-system/cilium-llqw4" Sep 9 05:29:56.463384 kubelet[2762]: I0909 05:29:56.462642 2762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tz49h\" (UniqueName: \"kubernetes.io/projected/bc72c973-7481-4539-bcff-b37df91c0d3f-kube-api-access-tz49h\") pod \"cilium-llqw4\" (UID: \"bc72c973-7481-4539-bcff-b37df91c0d3f\") " pod="kube-system/cilium-llqw4" Sep 9 05:29:56.463384 kubelet[2762]: I0909 05:29:56.462662 2762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: 
\"kubernetes.io/host-path/bc72c973-7481-4539-bcff-b37df91c0d3f-xtables-lock\") pod \"cilium-llqw4\" (UID: \"bc72c973-7481-4539-bcff-b37df91c0d3f\") " pod="kube-system/cilium-llqw4" Sep 9 05:29:56.463384 kubelet[2762]: I0909 05:29:56.462680 2762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/bc72c973-7481-4539-bcff-b37df91c0d3f-hostproc\") pod \"cilium-llqw4\" (UID: \"bc72c973-7481-4539-bcff-b37df91c0d3f\") " pod="kube-system/cilium-llqw4" Sep 9 05:29:56.463543 kubelet[2762]: I0909 05:29:56.462697 2762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/bc72c973-7481-4539-bcff-b37df91c0d3f-cilium-config-path\") pod \"cilium-llqw4\" (UID: \"bc72c973-7481-4539-bcff-b37df91c0d3f\") " pod="kube-system/cilium-llqw4" Sep 9 05:29:56.463543 kubelet[2762]: I0909 05:29:56.462715 2762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/bc72c973-7481-4539-bcff-b37df91c0d3f-etc-cni-netd\") pod \"cilium-llqw4\" (UID: \"bc72c973-7481-4539-bcff-b37df91c0d3f\") " pod="kube-system/cilium-llqw4" Sep 9 05:29:56.463543 kubelet[2762]: I0909 05:29:56.462735 2762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/bc72c973-7481-4539-bcff-b37df91c0d3f-cilium-cgroup\") pod \"cilium-llqw4\" (UID: \"bc72c973-7481-4539-bcff-b37df91c0d3f\") " pod="kube-system/cilium-llqw4" Sep 9 05:29:56.506066 sshd[4594]: Accepted publickey for core from 10.0.0.1 port 42036 ssh2: RSA SHA256:7F/y7C4gusWo4gyqUKS6/QkQLBnINJ/p9+95m14vjQE Sep 9 05:29:56.508152 sshd-session[4594]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 05:29:56.513572 systemd-logind[1533]: New session 31 of 
user core. Sep 9 05:29:56.519122 systemd[1]: Started session-31.scope - Session 31 of User core. Sep 9 05:29:57.075523 containerd[1591]: time="2025-09-09T05:29:57.075455076Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-llqw4,Uid:bc72c973-7481-4539-bcff-b37df91c0d3f,Namespace:kube-system,Attempt:0,}" Sep 9 05:29:57.310356 containerd[1591]: time="2025-09-09T05:29:57.310281409Z" level=info msg="connecting to shim f434de96390a51354c5e1ffa367a724cec406d22a6759de85cbae5bdac299097" address="unix:///run/containerd/s/ad0527ac0a83718fa5d8731ad4520c7332dd4b5d3de6959d4ddf710854cc8098" namespace=k8s.io protocol=ttrpc version=3 Sep 9 05:29:57.342242 systemd[1]: Started cri-containerd-f434de96390a51354c5e1ffa367a724cec406d22a6759de85cbae5bdac299097.scope - libcontainer container f434de96390a51354c5e1ffa367a724cec406d22a6759de85cbae5bdac299097. Sep 9 05:29:57.365891 kubelet[2762]: I0909 05:29:57.365022 2762 setters.go:600] "Node became not ready" node="localhost" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-09T05:29:57Z","lastTransitionTime":"2025-09-09T05:29:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"} Sep 9 05:29:57.390270 containerd[1591]: time="2025-09-09T05:29:57.390225874Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-llqw4,Uid:bc72c973-7481-4539-bcff-b37df91c0d3f,Namespace:kube-system,Attempt:0,} returns sandbox id \"f434de96390a51354c5e1ffa367a724cec406d22a6759de85cbae5bdac299097\"" Sep 9 05:29:57.392805 containerd[1591]: time="2025-09-09T05:29:57.392766918Z" level=info msg="CreateContainer within sandbox \"f434de96390a51354c5e1ffa367a724cec406d22a6759de85cbae5bdac299097\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Sep 9 05:29:57.460473 containerd[1591]: time="2025-09-09T05:29:57.460400635Z" level=info msg="Container 
0fb47780e01e00b138df27dda69004876a2a778abace6c79c1f44834c2f3ebc4: CDI devices from CRI Config.CDIDevices: []" Sep 9 05:29:57.735157 containerd[1591]: time="2025-09-09T05:29:57.735079434Z" level=info msg="CreateContainer within sandbox \"f434de96390a51354c5e1ffa367a724cec406d22a6759de85cbae5bdac299097\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"0fb47780e01e00b138df27dda69004876a2a778abace6c79c1f44834c2f3ebc4\"" Sep 9 05:29:57.735972 containerd[1591]: time="2025-09-09T05:29:57.735768828Z" level=info msg="StartContainer for \"0fb47780e01e00b138df27dda69004876a2a778abace6c79c1f44834c2f3ebc4\"" Sep 9 05:29:57.737052 containerd[1591]: time="2025-09-09T05:29:57.736991069Z" level=info msg="connecting to shim 0fb47780e01e00b138df27dda69004876a2a778abace6c79c1f44834c2f3ebc4" address="unix:///run/containerd/s/ad0527ac0a83718fa5d8731ad4520c7332dd4b5d3de6959d4ddf710854cc8098" protocol=ttrpc version=3 Sep 9 05:29:57.761118 systemd[1]: Started cri-containerd-0fb47780e01e00b138df27dda69004876a2a778abace6c79c1f44834c2f3ebc4.scope - libcontainer container 0fb47780e01e00b138df27dda69004876a2a778abace6c79c1f44834c2f3ebc4. Sep 9 05:29:57.863077 containerd[1591]: time="2025-09-09T05:29:57.863007278Z" level=info msg="StartContainer for \"0fb47780e01e00b138df27dda69004876a2a778abace6c79c1f44834c2f3ebc4\" returns successfully" Sep 9 05:29:57.871230 systemd[1]: cri-containerd-0fb47780e01e00b138df27dda69004876a2a778abace6c79c1f44834c2f3ebc4.scope: Deactivated successfully. 
Sep 9 05:29:57.874324 containerd[1591]: time="2025-09-09T05:29:57.874271237Z" level=info msg="received exit event container_id:\"0fb47780e01e00b138df27dda69004876a2a778abace6c79c1f44834c2f3ebc4\" id:\"0fb47780e01e00b138df27dda69004876a2a778abace6c79c1f44834c2f3ebc4\" pid:4668 exited_at:{seconds:1757395797 nanos:873947044}" Sep 9 05:29:57.874459 containerd[1591]: time="2025-09-09T05:29:57.874362329Z" level=info msg="TaskExit event in podsandbox handler container_id:\"0fb47780e01e00b138df27dda69004876a2a778abace6c79c1f44834c2f3ebc4\" id:\"0fb47780e01e00b138df27dda69004876a2a778abace6c79c1f44834c2f3ebc4\" pid:4668 exited_at:{seconds:1757395797 nanos:873947044}" Sep 9 05:29:57.896615 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-0fb47780e01e00b138df27dda69004876a2a778abace6c79c1f44834c2f3ebc4-rootfs.mount: Deactivated successfully. Sep 9 05:29:58.644934 containerd[1591]: time="2025-09-09T05:29:58.644812467Z" level=info msg="CreateContainer within sandbox \"f434de96390a51354c5e1ffa367a724cec406d22a6759de85cbae5bdac299097\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Sep 9 05:29:58.677400 containerd[1591]: time="2025-09-09T05:29:58.677341826Z" level=info msg="Container 1c02914e8c9f43c434af5bd2c776b1b0ed47eefcc3a5a16e76615fea240bac68: CDI devices from CRI Config.CDIDevices: []" Sep 9 05:29:58.688669 containerd[1591]: time="2025-09-09T05:29:58.688610120Z" level=info msg="CreateContainer within sandbox \"f434de96390a51354c5e1ffa367a724cec406d22a6759de85cbae5bdac299097\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"1c02914e8c9f43c434af5bd2c776b1b0ed47eefcc3a5a16e76615fea240bac68\"" Sep 9 05:29:58.689301 containerd[1591]: time="2025-09-09T05:29:58.689231406Z" level=info msg="StartContainer for \"1c02914e8c9f43c434af5bd2c776b1b0ed47eefcc3a5a16e76615fea240bac68\"" Sep 9 05:29:58.690401 containerd[1591]: time="2025-09-09T05:29:58.690355811Z" level=info msg="connecting to shim 
1c02914e8c9f43c434af5bd2c776b1b0ed47eefcc3a5a16e76615fea240bac68" address="unix:///run/containerd/s/ad0527ac0a83718fa5d8731ad4520c7332dd4b5d3de6959d4ddf710854cc8098" protocol=ttrpc version=3 Sep 9 05:29:58.723206 systemd[1]: Started cri-containerd-1c02914e8c9f43c434af5bd2c776b1b0ed47eefcc3a5a16e76615fea240bac68.scope - libcontainer container 1c02914e8c9f43c434af5bd2c776b1b0ed47eefcc3a5a16e76615fea240bac68. Sep 9 05:29:58.770323 containerd[1591]: time="2025-09-09T05:29:58.770226645Z" level=info msg="StartContainer for \"1c02914e8c9f43c434af5bd2c776b1b0ed47eefcc3a5a16e76615fea240bac68\" returns successfully" Sep 9 05:29:58.781662 systemd[1]: cri-containerd-1c02914e8c9f43c434af5bd2c776b1b0ed47eefcc3a5a16e76615fea240bac68.scope: Deactivated successfully. Sep 9 05:29:58.783509 containerd[1591]: time="2025-09-09T05:29:58.783444895Z" level=info msg="received exit event container_id:\"1c02914e8c9f43c434af5bd2c776b1b0ed47eefcc3a5a16e76615fea240bac68\" id:\"1c02914e8c9f43c434af5bd2c776b1b0ed47eefcc3a5a16e76615fea240bac68\" pid:4712 exited_at:{seconds:1757395798 nanos:783015885}" Sep 9 05:29:58.783759 containerd[1591]: time="2025-09-09T05:29:58.783718443Z" level=info msg="TaskExit event in podsandbox handler container_id:\"1c02914e8c9f43c434af5bd2c776b1b0ed47eefcc3a5a16e76615fea240bac68\" id:\"1c02914e8c9f43c434af5bd2c776b1b0ed47eefcc3a5a16e76615fea240bac68\" pid:4712 exited_at:{seconds:1757395798 nanos:783015885}" Sep 9 05:29:58.814833 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-1c02914e8c9f43c434af5bd2c776b1b0ed47eefcc3a5a16e76615fea240bac68-rootfs.mount: Deactivated successfully. 
Sep 9 05:29:59.439393 kubelet[2762]: E0909 05:29:59.439311 2762 kubelet.go:2902] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Sep 9 05:29:59.648524 containerd[1591]: time="2025-09-09T05:29:59.648442988Z" level=info msg="CreateContainer within sandbox \"f434de96390a51354c5e1ffa367a724cec406d22a6759de85cbae5bdac299097\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Sep 9 05:29:59.705043 containerd[1591]: time="2025-09-09T05:29:59.704954765Z" level=info msg="Container d23f0222fa607eeb62c3149ef813cc275deb2b1645cfc0a9bdbf43b1ec6ad780: CDI devices from CRI Config.CDIDevices: []" Sep 9 05:29:59.727326 containerd[1591]: time="2025-09-09T05:29:59.727269327Z" level=info msg="CreateContainer within sandbox \"f434de96390a51354c5e1ffa367a724cec406d22a6759de85cbae5bdac299097\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"d23f0222fa607eeb62c3149ef813cc275deb2b1645cfc0a9bdbf43b1ec6ad780\"" Sep 9 05:29:59.728109 containerd[1591]: time="2025-09-09T05:29:59.728073618Z" level=info msg="StartContainer for \"d23f0222fa607eeb62c3149ef813cc275deb2b1645cfc0a9bdbf43b1ec6ad780\"" Sep 9 05:29:59.730062 containerd[1591]: time="2025-09-09T05:29:59.730033172Z" level=info msg="connecting to shim d23f0222fa607eeb62c3149ef813cc275deb2b1645cfc0a9bdbf43b1ec6ad780" address="unix:///run/containerd/s/ad0527ac0a83718fa5d8731ad4520c7332dd4b5d3de6959d4ddf710854cc8098" protocol=ttrpc version=3 Sep 9 05:29:59.762184 systemd[1]: Started cri-containerd-d23f0222fa607eeb62c3149ef813cc275deb2b1645cfc0a9bdbf43b1ec6ad780.scope - libcontainer container d23f0222fa607eeb62c3149ef813cc275deb2b1645cfc0a9bdbf43b1ec6ad780. Sep 9 05:29:59.811221 systemd[1]: cri-containerd-d23f0222fa607eeb62c3149ef813cc275deb2b1645cfc0a9bdbf43b1ec6ad780.scope: Deactivated successfully. 
Sep 9 05:29:59.812628 containerd[1591]: time="2025-09-09T05:29:59.812576458Z" level=info msg="TaskExit event in podsandbox handler container_id:\"d23f0222fa607eeb62c3149ef813cc275deb2b1645cfc0a9bdbf43b1ec6ad780\" id:\"d23f0222fa607eeb62c3149ef813cc275deb2b1645cfc0a9bdbf43b1ec6ad780\" pid:4756 exited_at:{seconds:1757395799 nanos:812052216}"
Sep 9 05:29:59.813492 containerd[1591]: time="2025-09-09T05:29:59.813439449Z" level=info msg="received exit event container_id:\"d23f0222fa607eeb62c3149ef813cc275deb2b1645cfc0a9bdbf43b1ec6ad780\" id:\"d23f0222fa607eeb62c3149ef813cc275deb2b1645cfc0a9bdbf43b1ec6ad780\" pid:4756 exited_at:{seconds:1757395799 nanos:812052216}"
Sep 9 05:29:59.825137 containerd[1591]: time="2025-09-09T05:29:59.825078681Z" level=info msg="StartContainer for \"d23f0222fa607eeb62c3149ef813cc275deb2b1645cfc0a9bdbf43b1ec6ad780\" returns successfully"
Sep 9 05:29:59.842235 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d23f0222fa607eeb62c3149ef813cc275deb2b1645cfc0a9bdbf43b1ec6ad780-rootfs.mount: Deactivated successfully.
Sep 9 05:30:00.655349 containerd[1591]: time="2025-09-09T05:30:00.655297556Z" level=info msg="CreateContainer within sandbox \"f434de96390a51354c5e1ffa367a724cec406d22a6759de85cbae5bdac299097\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Sep 9 05:30:00.680586 containerd[1591]: time="2025-09-09T05:30:00.680477398Z" level=info msg="Container 1f35217a24f8001c2a9cf5a62385a63269e49b562f0f49af61c22827ba730d2a: CDI devices from CRI Config.CDIDevices: []"
Sep 9 05:30:00.692871 containerd[1591]: time="2025-09-09T05:30:00.692791383Z" level=info msg="CreateContainer within sandbox \"f434de96390a51354c5e1ffa367a724cec406d22a6759de85cbae5bdac299097\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"1f35217a24f8001c2a9cf5a62385a63269e49b562f0f49af61c22827ba730d2a\""
Sep 9 05:30:00.693731 containerd[1591]: time="2025-09-09T05:30:00.693673231Z" level=info msg="StartContainer for \"1f35217a24f8001c2a9cf5a62385a63269e49b562f0f49af61c22827ba730d2a\""
Sep 9 05:30:00.695145 containerd[1591]: time="2025-09-09T05:30:00.695108633Z" level=info msg="connecting to shim 1f35217a24f8001c2a9cf5a62385a63269e49b562f0f49af61c22827ba730d2a" address="unix:///run/containerd/s/ad0527ac0a83718fa5d8731ad4520c7332dd4b5d3de6959d4ddf710854cc8098" protocol=ttrpc version=3
Sep 9 05:30:00.719107 systemd[1]: Started cri-containerd-1f35217a24f8001c2a9cf5a62385a63269e49b562f0f49af61c22827ba730d2a.scope - libcontainer container 1f35217a24f8001c2a9cf5a62385a63269e49b562f0f49af61c22827ba730d2a.
Sep 9 05:30:00.748411 systemd[1]: cri-containerd-1f35217a24f8001c2a9cf5a62385a63269e49b562f0f49af61c22827ba730d2a.scope: Deactivated successfully.
Sep 9 05:30:00.749780 containerd[1591]: time="2025-09-09T05:30:00.749732909Z" level=info msg="TaskExit event in podsandbox handler container_id:\"1f35217a24f8001c2a9cf5a62385a63269e49b562f0f49af61c22827ba730d2a\" id:\"1f35217a24f8001c2a9cf5a62385a63269e49b562f0f49af61c22827ba730d2a\" pid:4796 exited_at:{seconds:1757395800 nanos:749143905}"
Sep 9 05:30:00.905592 containerd[1591]: time="2025-09-09T05:30:00.905398042Z" level=info msg="received exit event container_id:\"1f35217a24f8001c2a9cf5a62385a63269e49b562f0f49af61c22827ba730d2a\" id:\"1f35217a24f8001c2a9cf5a62385a63269e49b562f0f49af61c22827ba730d2a\" pid:4796 exited_at:{seconds:1757395800 nanos:749143905}"
Sep 9 05:30:00.909975 containerd[1591]: time="2025-09-09T05:30:00.909900272Z" level=info msg="StartContainer for \"1f35217a24f8001c2a9cf5a62385a63269e49b562f0f49af61c22827ba730d2a\" returns successfully"
Sep 9 05:30:00.933168 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-1f35217a24f8001c2a9cf5a62385a63269e49b562f0f49af61c22827ba730d2a-rootfs.mount: Deactivated successfully.
Sep 9 05:30:01.662717 containerd[1591]: time="2025-09-09T05:30:01.662086008Z" level=info msg="CreateContainer within sandbox \"f434de96390a51354c5e1ffa367a724cec406d22a6759de85cbae5bdac299097\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Sep 9 05:30:01.909285 containerd[1591]: time="2025-09-09T05:30:01.909217203Z" level=info msg="Container cade292e54170129bd3bf27bec1fd6017a8a7b2e28e87102fcd524b8f7cf9705: CDI devices from CRI Config.CDIDevices: []"
Sep 9 05:30:02.024605 containerd[1591]: time="2025-09-09T05:30:02.024540506Z" level=info msg="CreateContainer within sandbox \"f434de96390a51354c5e1ffa367a724cec406d22a6759de85cbae5bdac299097\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"cade292e54170129bd3bf27bec1fd6017a8a7b2e28e87102fcd524b8f7cf9705\""
Sep 9 05:30:02.025231 containerd[1591]: time="2025-09-09T05:30:02.025177739Z" level=info msg="StartContainer for \"cade292e54170129bd3bf27bec1fd6017a8a7b2e28e87102fcd524b8f7cf9705\""
Sep 9 05:30:02.026386 containerd[1591]: time="2025-09-09T05:30:02.026355305Z" level=info msg="connecting to shim cade292e54170129bd3bf27bec1fd6017a8a7b2e28e87102fcd524b8f7cf9705" address="unix:///run/containerd/s/ad0527ac0a83718fa5d8731ad4520c7332dd4b5d3de6959d4ddf710854cc8098" protocol=ttrpc version=3
Sep 9 05:30:02.051207 systemd[1]: Started cri-containerd-cade292e54170129bd3bf27bec1fd6017a8a7b2e28e87102fcd524b8f7cf9705.scope - libcontainer container cade292e54170129bd3bf27bec1fd6017a8a7b2e28e87102fcd524b8f7cf9705.
Sep 9 05:30:02.091931 containerd[1591]: time="2025-09-09T05:30:02.091840437Z" level=info msg="StartContainer for \"cade292e54170129bd3bf27bec1fd6017a8a7b2e28e87102fcd524b8f7cf9705\" returns successfully"
Sep 9 05:30:02.171171 containerd[1591]: time="2025-09-09T05:30:02.171090291Z" level=info msg="TaskExit event in podsandbox handler container_id:\"cade292e54170129bd3bf27bec1fd6017a8a7b2e28e87102fcd524b8f7cf9705\" id:\"54d6799c3a58e3eb905b47d0030da7d264e1a5cebfe30b2433720c8e52e6d768\" pid:4862 exited_at:{seconds:1757395802 nanos:170701807}"
Sep 9 05:30:02.568949 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni-avx))
Sep 9 05:30:03.479676 containerd[1591]: time="2025-09-09T05:30:03.479610843Z" level=info msg="TaskExit event in podsandbox handler container_id:\"cade292e54170129bd3bf27bec1fd6017a8a7b2e28e87102fcd524b8f7cf9705\" id:\"42950f0061a56245db69bb50fccc645d939d98218f4581f2b441660d310ef12a\" pid:4937 exit_status:1 exited_at:{seconds:1757395803 nanos:479206149}"
Sep 9 05:30:05.593168 containerd[1591]: time="2025-09-09T05:30:05.593096375Z" level=info msg="TaskExit event in podsandbox handler container_id:\"cade292e54170129bd3bf27bec1fd6017a8a7b2e28e87102fcd524b8f7cf9705\" id:\"bcba84b13c9d3d2e12039978501ce349a9a2b3a1859307b214c1ab21be5e87d2\" pid:5305 exit_status:1 exited_at:{seconds:1757395805 nanos:592476594}"
Sep 9 05:30:05.831418 systemd-networkd[1472]: lxc_health: Link UP
Sep 9 05:30:05.835594 systemd-networkd[1472]: lxc_health: Gained carrier
Sep 9 05:30:07.167942 kubelet[2762]: I0909 05:30:07.167296 2762 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-llqw4" podStartSLOduration=11.167249375 podStartE2EDuration="11.167249375s" podCreationTimestamp="2025-09-09 05:29:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-09 05:30:02.688752575 +0000 UTC m=+108.530195335" watchObservedRunningTime="2025-09-09 05:30:07.167249375 +0000 UTC m=+113.008692155"
Sep 9 05:30:07.224232 systemd-networkd[1472]: lxc_health: Gained IPv6LL
Sep 9 05:30:07.739682 containerd[1591]: time="2025-09-09T05:30:07.739603517Z" level=info msg="TaskExit event in podsandbox handler container_id:\"cade292e54170129bd3bf27bec1fd6017a8a7b2e28e87102fcd524b8f7cf9705\" id:\"fca4851b405e0cf10d6cde308b261c5f1982399085038a1dbcd4f1e67013b3c7\" pid:5425 exited_at:{seconds:1757395807 nanos:739111027}"
Sep 9 05:30:09.879810 containerd[1591]: time="2025-09-09T05:30:09.879746703Z" level=info msg="TaskExit event in podsandbox handler container_id:\"cade292e54170129bd3bf27bec1fd6017a8a7b2e28e87102fcd524b8f7cf9705\" id:\"75f11369fdb4ebb1e3fc975b0912b608284376c79bbeb09d4c2a994413753f02\" pid:5457 exited_at:{seconds:1757395809 nanos:879121393}"
Sep 9 05:30:12.010569 containerd[1591]: time="2025-09-09T05:30:12.010493852Z" level=info msg="TaskExit event in podsandbox handler container_id:\"cade292e54170129bd3bf27bec1fd6017a8a7b2e28e87102fcd524b8f7cf9705\" id:\"3d61ca0484a5b201e53a2c70e81355d34ab53932da5eb6ed9115b3530a168dc2\" pid:5481 exited_at:{seconds:1757395812 nanos:10110839}"
Sep 9 05:30:12.024179 sshd[4597]: Connection closed by 10.0.0.1 port 42036
Sep 9 05:30:12.024701 sshd-session[4594]: pam_unix(sshd:session): session closed for user core
Sep 9 05:30:12.031968 systemd[1]: sshd@30-10.0.0.16:22-10.0.0.1:42036.service: Deactivated successfully.
Sep 9 05:30:12.035585 systemd[1]: session-31.scope: Deactivated successfully.
Sep 9 05:30:12.037385 systemd-logind[1533]: Session 31 logged out. Waiting for processes to exit.
Sep 9 05:30:12.039154 systemd-logind[1533]: Removed session 31.