Sep 4 05:19:37.836366 kernel: Linux version 6.12.44-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.3.0 p8) 14.3.0, GNU ld (Gentoo 2.44 p4) 2.44.0) #1 SMP PREEMPT_DYNAMIC Thu Sep 4 03:28:49 -00 2025 Sep 4 05:19:37.836400 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=19f99bd65222f80f4cb00b5100edf579df788968f4e157fc2f808292a9d6de09 Sep 4 05:19:37.836409 kernel: BIOS-provided physical RAM map: Sep 4 05:19:37.836416 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable Sep 4 05:19:37.836423 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved Sep 4 05:19:37.836431 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved Sep 4 05:19:37.836439 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000009cfdbfff] usable Sep 4 05:19:37.836445 kernel: BIOS-e820: [mem 0x000000009cfdc000-0x000000009cffffff] reserved Sep 4 05:19:37.836458 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved Sep 4 05:19:37.836465 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved Sep 4 05:19:37.836471 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved Sep 4 05:19:37.836478 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved Sep 4 05:19:37.836484 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved Sep 4 05:19:37.836491 kernel: NX (Execute Disable) protection: active Sep 4 05:19:37.836502 kernel: APIC: Static calls initialized Sep 4 05:19:37.836509 kernel: SMBIOS 2.8 present. Sep 4 05:19:37.836518 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.16.2-debian-1.16.2-1 04/01/2014 Sep 4 05:19:37.836526 kernel: DMI: Memory slots populated: 1/1 Sep 4 05:19:37.836533 kernel: Hypervisor detected: KVM Sep 4 05:19:37.836540 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00 Sep 4 05:19:37.836547 kernel: kvm-clock: using sched offset of 4534924331 cycles Sep 4 05:19:37.836554 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns Sep 4 05:19:37.836562 kernel: tsc: Detected 2794.748 MHz processor Sep 4 05:19:37.836570 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Sep 4 05:19:37.836580 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Sep 4 05:19:37.836587 kernel: last_pfn = 0x9cfdc max_arch_pfn = 0x400000000 Sep 4 05:19:37.836595 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs Sep 4 05:19:37.836602 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Sep 4 05:19:37.836609 kernel: Using GB pages for direct mapping Sep 4 05:19:37.836617 kernel: ACPI: Early table checksum verification disabled Sep 4 05:19:37.836624 kernel: ACPI: RSDP 0x00000000000F59D0 000014 (v00 BOCHS ) Sep 4 05:19:37.836632 kernel: ACPI: RSDT 0x000000009CFE241A 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001) Sep 4 05:19:37.836641 kernel: ACPI: FACP 0x000000009CFE21FA 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001) Sep 4 05:19:37.836649 kernel: ACPI: DSDT 0x000000009CFE0040 0021BA (v01 BOCHS BXPC 00000001 BXPC 00000001) Sep 4 05:19:37.836656 kernel: ACPI: FACS 0x000000009CFE0000 000040 Sep 4 05:19:37.836663 kernel: ACPI: APIC 0x000000009CFE22EE 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001) Sep 4 05:19:37.836671 kernel: ACPI: HPET 
0x000000009CFE237E 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001) Sep 4 05:19:37.836678 kernel: ACPI: MCFG 0x000000009CFE23B6 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001) Sep 4 05:19:37.836685 kernel: ACPI: WAET 0x000000009CFE23F2 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001) Sep 4 05:19:37.836693 kernel: ACPI: Reserving FACP table memory at [mem 0x9cfe21fa-0x9cfe22ed] Sep 4 05:19:37.836705 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cfe0040-0x9cfe21f9] Sep 4 05:19:37.836713 kernel: ACPI: Reserving FACS table memory at [mem 0x9cfe0000-0x9cfe003f] Sep 4 05:19:37.836720 kernel: ACPI: Reserving APIC table memory at [mem 0x9cfe22ee-0x9cfe237d] Sep 4 05:19:37.836728 kernel: ACPI: Reserving HPET table memory at [mem 0x9cfe237e-0x9cfe23b5] Sep 4 05:19:37.836748 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cfe23b6-0x9cfe23f1] Sep 4 05:19:37.836756 kernel: ACPI: Reserving WAET table memory at [mem 0x9cfe23f2-0x9cfe2419] Sep 4 05:19:37.836785 kernel: No NUMA configuration found Sep 4 05:19:37.836793 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cfdbfff] Sep 4 05:19:37.836801 kernel: NODE_DATA(0) allocated [mem 0x9cfd4dc0-0x9cfdbfff] Sep 4 05:19:37.836808 kernel: Zone ranges: Sep 4 05:19:37.836816 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Sep 4 05:19:37.836824 kernel: DMA32 [mem 0x0000000001000000-0x000000009cfdbfff] Sep 4 05:19:37.836831 kernel: Normal empty Sep 4 05:19:37.836838 kernel: Device empty Sep 4 05:19:37.836846 kernel: Movable zone start for each node Sep 4 05:19:37.836858 kernel: Early memory node ranges Sep 4 05:19:37.836869 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff] Sep 4 05:19:37.836877 kernel: node 0: [mem 0x0000000000100000-0x000000009cfdbfff] Sep 4 05:19:37.836884 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cfdbfff] Sep 4 05:19:37.836892 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Sep 4 05:19:37.836899 kernel: On node 0, zone DMA: 97 pages in unavailable ranges Sep 4 05:19:37.836910 kernel: On node 0, zone DMA32: 12324 pages in unavailable ranges Sep 4 05:19:37.836918 kernel: ACPI: PM-Timer IO Port: 0x608 Sep 4 05:19:37.836927 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1]) Sep 4 05:19:37.836935 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23 Sep 4 05:19:37.836945 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl) Sep 4 05:19:37.836953 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level) Sep 4 05:19:37.836963 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Sep 4 05:19:37.836970 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level) Sep 4 05:19:37.836978 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level) Sep 4 05:19:37.836986 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Sep 4 05:19:37.836993 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000 Sep 4 05:19:37.837000 kernel: TSC deadline timer available Sep 4 05:19:37.837008 kernel: CPU topo: Max. logical packages: 1 Sep 4 05:19:37.837017 kernel: CPU topo: Max. logical dies: 1 Sep 4 05:19:37.837025 kernel: CPU topo: Max. dies per package: 1 Sep 4 05:19:37.837032 kernel: CPU topo: Max. threads per core: 1 Sep 4 05:19:37.837039 kernel: CPU topo: Num. cores per package: 4 Sep 4 05:19:37.837047 kernel: CPU topo: Num. 
threads per package: 4 Sep 4 05:19:37.837054 kernel: CPU topo: Allowing 4 present CPUs plus 0 hotplug CPUs Sep 4 05:19:37.837063 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write() Sep 4 05:19:37.837073 kernel: kvm-guest: KVM setup pv remote TLB flush Sep 4 05:19:37.837082 kernel: kvm-guest: setup PV sched yield Sep 4 05:19:37.837119 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices Sep 4 05:19:37.837130 kernel: Booting paravirtualized kernel on KVM Sep 4 05:19:37.837138 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Sep 4 05:19:37.837146 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1 Sep 4 05:19:37.837153 kernel: percpu: Embedded 60 pages/cpu s207832 r8192 d29736 u524288 Sep 4 05:19:37.837161 kernel: pcpu-alloc: s207832 r8192 d29736 u524288 alloc=1*2097152 Sep 4 05:19:37.837169 kernel: pcpu-alloc: [0] 0 1 2 3 Sep 4 05:19:37.837176 kernel: kvm-guest: PV spinlocks enabled Sep 4 05:19:37.837184 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear) Sep 4 05:19:37.837193 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=19f99bd65222f80f4cb00b5100edf579df788968f4e157fc2f808292a9d6de09 Sep 4 05:19:37.837203 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Sep 4 05:19:37.837211 kernel: random: crng init done Sep 4 05:19:37.837218 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Sep 4 05:19:37.837226 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Sep 4 05:19:37.837234 kernel: Fallback order for Node 0: 0 Sep 4 05:19:37.837241 kernel: Built 1 zonelists, mobility grouping on. Total pages: 642938 Sep 4 05:19:37.837249 kernel: Policy zone: DMA32 Sep 4 05:19:37.837257 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Sep 4 05:19:37.837267 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1 Sep 4 05:19:37.837274 kernel: ftrace: allocating 40102 entries in 157 pages Sep 4 05:19:37.837282 kernel: ftrace: allocated 157 pages with 5 groups Sep 4 05:19:37.837290 kernel: Dynamic Preempt: voluntary Sep 4 05:19:37.837297 kernel: rcu: Preemptible hierarchical RCU implementation. Sep 4 05:19:37.837306 kernel: rcu: RCU event tracing is enabled. Sep 4 05:19:37.837313 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4. Sep 4 05:19:37.837321 kernel: Trampoline variant of Tasks RCU enabled. Sep 4 05:19:37.837332 kernel: Rude variant of Tasks RCU enabled. Sep 4 05:19:37.837342 kernel: Tracing variant of Tasks RCU enabled. Sep 4 05:19:37.837350 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Sep 4 05:19:37.837358 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4 Sep 4 05:19:37.837365 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Sep 4 05:19:37.837380 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Sep 4 05:19:37.837388 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. 
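The "Kernel command line" entry above carries the dm-verity and root-device parameters (verity.usr, verity.usrhash, root=LABEL=ROOT) that later units in this log act on. A minimal sketch, not part of Flatcar, of splitting such a command line into key/value pairs; on a live system the same string is available in /proc/cmdline.

# Hypothetical helper: split a kernel command line into a dict so fields such as
# verity.usrhash or root can be inspected. Later duplicates overwrite earlier ones.
def parse_cmdline(cmdline: str) -> dict:
    params = {}
    for token in cmdline.split():
        key, sep, value = token.partition("=")
        params[key] = value if sep else True   # bare flags become True
    return params

if __name__ == "__main__":
    with open("/proc/cmdline") as f:
        args = parse_cmdline(f.read().strip())
    print(args.get("root"), args.get("verity.usrhash"))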
Sep 4 05:19:37.837396 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16 Sep 4 05:19:37.837404 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Sep 4 05:19:37.837420 kernel: Console: colour VGA+ 80x25 Sep 4 05:19:37.837428 kernel: printk: legacy console [ttyS0] enabled Sep 4 05:19:37.837436 kernel: ACPI: Core revision 20240827 Sep 4 05:19:37.837445 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns Sep 4 05:19:37.837455 kernel: APIC: Switch to symmetric I/O mode setup Sep 4 05:19:37.837463 kernel: x2apic enabled Sep 4 05:19:37.837471 kernel: APIC: Switched APIC routing to: physical x2apic Sep 4 05:19:37.837481 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask() Sep 4 05:19:37.837490 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself() Sep 4 05:19:37.837500 kernel: kvm-guest: setup PV IPIs Sep 4 05:19:37.837508 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1 Sep 4 05:19:37.837516 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x2848df6a9de, max_idle_ns: 440795280912 ns Sep 4 05:19:37.837524 kernel: Calibrating delay loop (skipped) preset value.. 5589.49 BogoMIPS (lpj=2794748) Sep 4 05:19:37.837532 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated Sep 4 05:19:37.837540 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127 Sep 4 05:19:37.837548 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0 Sep 4 05:19:37.837556 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Sep 4 05:19:37.837566 kernel: Spectre V2 : Mitigation: Retpolines Sep 4 05:19:37.837574 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT Sep 4 05:19:37.837582 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls Sep 4 05:19:37.837590 kernel: active return thunk: retbleed_return_thunk Sep 4 05:19:37.837598 kernel: RETBleed: Mitigation: untrained return thunk Sep 4 05:19:37.837606 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier Sep 4 05:19:37.837614 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl Sep 4 05:19:37.837621 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied! Sep 4 05:19:37.837630 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options. Sep 4 05:19:37.837641 kernel: active return thunk: srso_return_thunk Sep 4 05:19:37.837649 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode Sep 4 05:19:37.837657 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Sep 4 05:19:37.837665 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Sep 4 05:19:37.837673 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Sep 4 05:19:37.837681 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Sep 4 05:19:37.837689 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format. Sep 4 05:19:37.837697 kernel: Freeing SMP alternatives memory: 32K Sep 4 05:19:37.837705 kernel: pid_max: default: 32768 minimum: 301 Sep 4 05:19:37.837714 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima Sep 4 05:19:37.837733 kernel: landlock: Up and running. Sep 4 05:19:37.837742 kernel: SELinux: Initializing. 
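The Spectre V1/V2, RETBleed and Speculative Return Stack Overflow lines above summarize the mitigation state the kernel chose for this guest CPU. A small illustrative sketch (assuming the usual sysfs layout on this kernel) that reads back the same status from userspace:

# Read the per-vulnerability status files the kernel exposes; each holds one line,
# e.g. "Mitigation: Retpolines" or "Vulnerable: Safe RET, no microcode".
import pathlib

vuln_dir = pathlib.Path("/sys/devices/system/cpu/vulnerabilities")
for entry in sorted(vuln_dir.iterdir()):
    print(f"{entry.name}: {entry.read_text().strip()}")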
Sep 4 05:19:37.837762 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Sep 4 05:19:37.837770 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Sep 4 05:19:37.837778 kernel: smpboot: CPU0: AMD EPYC 7402P 24-Core Processor (family: 0x17, model: 0x31, stepping: 0x0) Sep 4 05:19:37.837786 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver. Sep 4 05:19:37.837794 kernel: ... version: 0 Sep 4 05:19:37.837802 kernel: ... bit width: 48 Sep 4 05:19:37.837818 kernel: ... generic registers: 6 Sep 4 05:19:37.837827 kernel: ... value mask: 0000ffffffffffff Sep 4 05:19:37.837835 kernel: ... max period: 00007fffffffffff Sep 4 05:19:37.837842 kernel: ... fixed-purpose events: 0 Sep 4 05:19:37.837850 kernel: ... event mask: 000000000000003f Sep 4 05:19:37.837858 kernel: signal: max sigframe size: 1776 Sep 4 05:19:37.837866 kernel: rcu: Hierarchical SRCU implementation. Sep 4 05:19:37.837874 kernel: rcu: Max phase no-delay instances is 400. Sep 4 05:19:37.837882 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level Sep 4 05:19:37.837893 kernel: smp: Bringing up secondary CPUs ... Sep 4 05:19:37.837901 kernel: smpboot: x86: Booting SMP configuration: Sep 4 05:19:37.837909 kernel: .... node #0, CPUs: #1 #2 #3 Sep 4 05:19:37.837917 kernel: smp: Brought up 1 node, 4 CPUs Sep 4 05:19:37.837925 kernel: smpboot: Total of 4 processors activated (22357.98 BogoMIPS) Sep 4 05:19:37.837933 kernel: Memory: 2428916K/2571752K available (14336K kernel code, 2428K rwdata, 9988K rodata, 54064K init, 2904K bss, 136904K reserved, 0K cma-reserved) Sep 4 05:19:37.837941 kernel: devtmpfs: initialized Sep 4 05:19:37.837949 kernel: x86/mm: Memory block size: 128MB Sep 4 05:19:37.837957 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Sep 4 05:19:37.837967 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear) Sep 4 05:19:37.837975 kernel: pinctrl core: initialized pinctrl subsystem Sep 4 05:19:37.837986 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Sep 4 05:19:37.837994 kernel: audit: initializing netlink subsys (disabled) Sep 4 05:19:37.838002 kernel: audit: type=2000 audit(1756963175.592:1): state=initialized audit_enabled=0 res=1 Sep 4 05:19:37.838010 kernel: thermal_sys: Registered thermal governor 'step_wise' Sep 4 05:19:37.838018 kernel: thermal_sys: Registered thermal governor 'user_space' Sep 4 05:19:37.838026 kernel: cpuidle: using governor menu Sep 4 05:19:37.838034 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Sep 4 05:19:37.838044 kernel: dca service started, version 1.12.1 Sep 4 05:19:37.838052 kernel: PCI: ECAM [mem 0xb0000000-0xbfffffff] (base 0xb0000000) for domain 0000 [bus 00-ff] Sep 4 05:19:37.838060 kernel: PCI: ECAM [mem 0xb0000000-0xbfffffff] reserved as E820 entry Sep 4 05:19:37.838068 kernel: PCI: Using configuration type 1 for base access Sep 4 05:19:37.838076 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. 
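The calibration line above reports 5589.49 BogoMIPS with lpj=2794748, and the SMP summary reports 22357.98 BogoMIPS for 4 CPUs; a quick consistency check of those numbers, assuming the conventional HZ=1000 timer frequency:

# BogoMIPS is derived from loops-per-jiffy; the SMP total is the per-CPU sum.
lpj = 2794748          # "lpj=2794748" from the calibration line
hz = 1000              # assumed kernel timer frequency
bogomips = lpj * hz / 500_000
print(round(bogomips, 2))        # ~5589.50; the kernel prints the truncated 5589.49
print(round(4 * 5589.495, 2))    # 22357.98, matching "Total of 4 processors activated"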
Sep 4 05:19:37.838095 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Sep 4 05:19:37.838104 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page Sep 4 05:19:37.838112 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Sep 4 05:19:37.838120 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page Sep 4 05:19:37.838131 kernel: ACPI: Added _OSI(Module Device) Sep 4 05:19:37.838138 kernel: ACPI: Added _OSI(Processor Device) Sep 4 05:19:37.838146 kernel: ACPI: Added _OSI(Processor Aggregator Device) Sep 4 05:19:37.838154 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Sep 4 05:19:37.838162 kernel: ACPI: Interpreter enabled Sep 4 05:19:37.838170 kernel: ACPI: PM: (supports S0 S3 S5) Sep 4 05:19:37.838178 kernel: ACPI: Using IOAPIC for interrupt routing Sep 4 05:19:37.838186 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Sep 4 05:19:37.838194 kernel: PCI: Using E820 reservations for host bridge windows Sep 4 05:19:37.838204 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F Sep 4 05:19:37.838212 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Sep 4 05:19:37.838425 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Sep 4 05:19:37.838559 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR] Sep 4 05:19:37.838683 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability] Sep 4 05:19:37.838694 kernel: PCI host bridge to bus 0000:00 Sep 4 05:19:37.838830 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] Sep 4 05:19:37.838950 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] Sep 4 05:19:37.839060 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] Sep 4 05:19:37.839190 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xafffffff window] Sep 4 05:19:37.839325 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window] Sep 4 05:19:37.839451 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x8ffffffff window] Sep 4 05:19:37.839563 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Sep 4 05:19:37.839728 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000 conventional PCI endpoint Sep 4 05:19:37.839871 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000 conventional PCI endpoint Sep 4 05:19:37.839993 kernel: pci 0000:00:01.0: BAR 0 [mem 0xfd000000-0xfdffffff pref] Sep 4 05:19:37.840211 kernel: pci 0000:00:01.0: BAR 2 [mem 0xfebd0000-0xfebd0fff] Sep 4 05:19:37.840337 kernel: pci 0000:00:01.0: ROM [mem 0xfebc0000-0xfebcffff pref] Sep 4 05:19:37.840469 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] Sep 4 05:19:37.840611 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00 conventional PCI endpoint Sep 4 05:19:37.840741 kernel: pci 0000:00:02.0: BAR 0 [io 0xc0c0-0xc0df] Sep 4 05:19:37.840863 kernel: pci 0000:00:02.0: BAR 1 [mem 0xfebd1000-0xfebd1fff] Sep 4 05:19:37.840983 kernel: pci 0000:00:02.0: BAR 4 [mem 0xfe000000-0xfe003fff 64bit pref] Sep 4 05:19:37.841179 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000 conventional PCI endpoint Sep 4 05:19:37.841307 kernel: pci 0000:00:03.0: BAR 0 [io 0xc000-0xc07f] Sep 4 05:19:37.841439 kernel: pci 0000:00:03.0: BAR 1 [mem 0xfebd2000-0xfebd2fff] Sep 4 05:19:37.841565 kernel: pci 0000:00:03.0: BAR 4 [mem 0xfe004000-0xfe007fff 64bit pref] Sep 4 05:19:37.841708 kernel: pci 
0000:00:04.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint Sep 4 05:19:37.841832 kernel: pci 0000:00:04.0: BAR 0 [io 0xc0e0-0xc0ff] Sep 4 05:19:37.841957 kernel: pci 0000:00:04.0: BAR 1 [mem 0xfebd3000-0xfebd3fff] Sep 4 05:19:37.842112 kernel: pci 0000:00:04.0: BAR 4 [mem 0xfe008000-0xfe00bfff 64bit pref] Sep 4 05:19:37.842272 kernel: pci 0000:00:04.0: ROM [mem 0xfeb80000-0xfebbffff pref] Sep 4 05:19:37.842459 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100 conventional PCI endpoint Sep 4 05:19:37.842591 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO Sep 4 05:19:37.842735 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601 conventional PCI endpoint Sep 4 05:19:37.842871 kernel: pci 0000:00:1f.2: BAR 4 [io 0xc100-0xc11f] Sep 4 05:19:37.842993 kernel: pci 0000:00:1f.2: BAR 5 [mem 0xfebd4000-0xfebd4fff] Sep 4 05:19:37.843683 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500 conventional PCI endpoint Sep 4 05:19:37.843811 kernel: pci 0000:00:1f.3: BAR 4 [io 0x0700-0x073f] Sep 4 05:19:37.843822 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10 Sep 4 05:19:37.843835 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10 Sep 4 05:19:37.843843 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 Sep 4 05:19:37.843851 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11 Sep 4 05:19:37.843859 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10 Sep 4 05:19:37.843867 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10 Sep 4 05:19:37.843875 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11 Sep 4 05:19:37.843883 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11 Sep 4 05:19:37.843890 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16 Sep 4 05:19:37.843898 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17 Sep 4 05:19:37.843908 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18 Sep 4 05:19:37.843916 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19 Sep 4 05:19:37.843924 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20 Sep 4 05:19:37.843932 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21 Sep 4 05:19:37.843940 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22 Sep 4 05:19:37.843948 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23 Sep 4 05:19:37.843956 kernel: iommu: Default domain type: Translated Sep 4 05:19:37.843964 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Sep 4 05:19:37.843972 kernel: PCI: Using ACPI for IRQ routing Sep 4 05:19:37.843982 kernel: PCI: pci_cache_line_size set to 64 bytes Sep 4 05:19:37.843990 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff] Sep 4 05:19:37.843998 kernel: e820: reserve RAM buffer [mem 0x9cfdc000-0x9fffffff] Sep 4 05:19:37.844136 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device Sep 4 05:19:37.844258 kernel: pci 0000:00:01.0: vgaarb: bridge control possible Sep 4 05:19:37.844387 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none Sep 4 05:19:37.844398 kernel: vgaarb: loaded Sep 4 05:19:37.844407 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0 Sep 4 05:19:37.844419 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter Sep 4 05:19:37.844427 kernel: clocksource: Switched to clocksource kvm-clock Sep 4 05:19:37.844436 kernel: VFS: Disk quotas dquot_6.6.0 Sep 4 05:19:37.844483 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Sep 4 
05:19:37.844492 kernel: pnp: PnP ACPI init Sep 4 05:19:37.844657 kernel: system 00:05: [mem 0xb0000000-0xbfffffff window] has been reserved Sep 4 05:19:37.844669 kernel: pnp: PnP ACPI: found 6 devices Sep 4 05:19:37.844677 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Sep 4 05:19:37.844686 kernel: NET: Registered PF_INET protocol family Sep 4 05:19:37.844697 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) Sep 4 05:19:37.844705 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) Sep 4 05:19:37.844713 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Sep 4 05:19:37.844722 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) Sep 4 05:19:37.844730 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear) Sep 4 05:19:37.844738 kernel: TCP: Hash tables configured (established 32768 bind 32768) Sep 4 05:19:37.844745 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) Sep 4 05:19:37.844753 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) Sep 4 05:19:37.844763 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Sep 4 05:19:37.844771 kernel: NET: Registered PF_XDP protocol family Sep 4 05:19:37.844884 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] Sep 4 05:19:37.844995 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] Sep 4 05:19:37.845121 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] Sep 4 05:19:37.845239 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xafffffff window] Sep 4 05:19:37.845352 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window] Sep 4 05:19:37.845472 kernel: pci_bus 0000:00: resource 9 [mem 0x100000000-0x8ffffffff window] Sep 4 05:19:37.845483 kernel: PCI: CLS 0 bytes, default 64 Sep 4 05:19:37.845495 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x2848df6a9de, max_idle_ns: 440795280912 ns Sep 4 05:19:37.845503 kernel: Initialise system trusted keyrings Sep 4 05:19:37.845511 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 Sep 4 05:19:37.845519 kernel: Key type asymmetric registered Sep 4 05:19:37.845527 kernel: Asymmetric key parser 'x509' registered Sep 4 05:19:37.845535 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250) Sep 4 05:19:37.845543 kernel: io scheduler mq-deadline registered Sep 4 05:19:37.845551 kernel: io scheduler kyber registered Sep 4 05:19:37.845559 kernel: io scheduler bfq registered Sep 4 05:19:37.845570 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Sep 4 05:19:37.845578 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22 Sep 4 05:19:37.845586 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23 Sep 4 05:19:37.845594 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20 Sep 4 05:19:37.845602 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Sep 4 05:19:37.845610 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Sep 4 05:19:37.845618 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 Sep 4 05:19:37.845627 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Sep 4 05:19:37.845634 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Sep 4 05:19:37.845763 kernel: rtc_cmos 00:04: RTC can wake from S4 Sep 4 05:19:37.845775 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Sep 4 05:19:37.845888 kernel: 
rtc_cmos 00:04: registered as rtc0 Sep 4 05:19:37.846014 kernel: rtc_cmos 00:04: setting system clock to 2025-09-04T05:19:37 UTC (1756963177) Sep 4 05:19:37.846148 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram, hpet irqs Sep 4 05:19:37.846159 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled Sep 4 05:19:37.846167 kernel: NET: Registered PF_INET6 protocol family Sep 4 05:19:37.846175 kernel: Segment Routing with IPv6 Sep 4 05:19:37.846187 kernel: In-situ OAM (IOAM) with IPv6 Sep 4 05:19:37.846195 kernel: NET: Registered PF_PACKET protocol family Sep 4 05:19:37.846203 kernel: Key type dns_resolver registered Sep 4 05:19:37.846211 kernel: IPI shorthand broadcast: enabled Sep 4 05:19:37.846219 kernel: sched_clock: Marking stable (3294003455, 107965637)->(3421276076, -19306984) Sep 4 05:19:37.846227 kernel: registered taskstats version 1 Sep 4 05:19:37.846235 kernel: Loading compiled-in X.509 certificates Sep 4 05:19:37.846243 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.44-flatcar: 96a8a78d4103eb94fb67942e49826b5b0aa3ea41' Sep 4 05:19:37.846252 kernel: Demotion targets for Node 0: null Sep 4 05:19:37.846265 kernel: Key type .fscrypt registered Sep 4 05:19:37.846275 kernel: Key type fscrypt-provisioning registered Sep 4 05:19:37.846285 kernel: ima: No TPM chip found, activating TPM-bypass! Sep 4 05:19:37.846295 kernel: ima: Allocated hash algorithm: sha1 Sep 4 05:19:37.846306 kernel: ima: No architecture policies found Sep 4 05:19:37.846315 kernel: clk: Disabling unused clocks Sep 4 05:19:37.846323 kernel: Warning: unable to open an initial console. Sep 4 05:19:37.846331 kernel: Freeing unused kernel image (initmem) memory: 54064K Sep 4 05:19:37.846342 kernel: Write protecting the kernel read-only data: 24576k Sep 4 05:19:37.846350 kernel: Freeing unused kernel image (rodata/data gap) memory: 252K Sep 4 05:19:37.846358 kernel: Run /init as init process Sep 4 05:19:37.846366 kernel: with arguments: Sep 4 05:19:37.846382 kernel: /init Sep 4 05:19:37.846390 kernel: with environment: Sep 4 05:19:37.846397 kernel: HOME=/ Sep 4 05:19:37.846405 kernel: TERM=linux Sep 4 05:19:37.846413 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Sep 4 05:19:37.846422 systemd[1]: Successfully made /usr/ read-only. Sep 4 05:19:37.846445 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Sep 4 05:19:37.846456 systemd[1]: Detected virtualization kvm. Sep 4 05:19:37.846465 systemd[1]: Detected architecture x86-64. Sep 4 05:19:37.846473 systemd[1]: Running in initrd. Sep 4 05:19:37.846481 systemd[1]: No hostname configured, using default hostname. Sep 4 05:19:37.846493 systemd[1]: Hostname set to . Sep 4 05:19:37.846501 systemd[1]: Initializing machine ID from VM UUID. Sep 4 05:19:37.846510 systemd[1]: Queued start job for default target initrd.target. Sep 4 05:19:37.846518 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Sep 4 05:19:37.846530 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Sep 4 05:19:37.846539 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... 
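The rtc_cmos entry above sets the system clock to 2025-09-04T05:19:37 UTC from the epoch value 1756963177; a one-line check that the two representations agree:

from datetime import datetime, timezone
print(datetime.fromtimestamp(1756963177, tz=timezone.utc).isoformat())
# -> 2025-09-04T05:19:37+00:00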
Sep 4 05:19:37.846548 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Sep 4 05:19:37.846557 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Sep 4 05:19:37.846568 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Sep 4 05:19:37.846578 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Sep 4 05:19:37.846587 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Sep 4 05:19:37.846597 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Sep 4 05:19:37.846608 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Sep 4 05:19:37.846620 systemd[1]: Reached target paths.target - Path Units. Sep 4 05:19:37.846636 systemd[1]: Reached target slices.target - Slice Units. Sep 4 05:19:37.846647 systemd[1]: Reached target swap.target - Swaps. Sep 4 05:19:37.846656 systemd[1]: Reached target timers.target - Timer Units. Sep 4 05:19:37.846665 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Sep 4 05:19:37.846673 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Sep 4 05:19:37.846682 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Sep 4 05:19:37.846691 systemd[1]: Listening on systemd-journald.socket - Journal Sockets. Sep 4 05:19:37.846700 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Sep 4 05:19:37.846710 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Sep 4 05:19:37.846724 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Sep 4 05:19:37.846735 systemd[1]: Reached target sockets.target - Socket Units. Sep 4 05:19:37.846747 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Sep 4 05:19:37.846759 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Sep 4 05:19:37.846774 systemd[1]: Finished network-cleanup.service - Network Cleanup. Sep 4 05:19:37.846789 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply). Sep 4 05:19:37.846801 systemd[1]: Starting systemd-fsck-usr.service... Sep 4 05:19:37.846813 systemd[1]: Starting systemd-journald.service - Journal Service... Sep 4 05:19:37.846825 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Sep 4 05:19:37.846837 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Sep 4 05:19:37.846849 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Sep 4 05:19:37.846868 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Sep 4 05:19:37.846880 systemd[1]: Finished systemd-fsck-usr.service. Sep 4 05:19:37.846892 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Sep 4 05:19:37.846933 systemd-journald[220]: Collecting audit messages is disabled. Sep 4 05:19:37.846957 systemd-journald[220]: Journal started Sep 4 05:19:37.846978 systemd-journald[220]: Runtime Journal (/run/log/journal/9d22143419f64bdf979eb2a57ebf63ae) is 6M, max 48.6M, 42.5M free. 
Sep 4 05:19:37.835483 systemd-modules-load[221]: Inserted module 'overlay' Sep 4 05:19:37.851663 systemd[1]: Started systemd-journald.service - Journal Service. Sep 4 05:19:37.852495 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Sep 4 05:19:37.890433 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Sep 4 05:19:37.890471 kernel: Bridge firewalling registered Sep 4 05:19:37.866446 systemd-modules-load[221]: Inserted module 'br_netfilter' Sep 4 05:19:37.894239 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Sep 4 05:19:37.895704 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Sep 4 05:19:37.898138 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Sep 4 05:19:37.903255 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Sep 4 05:19:37.906870 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Sep 4 05:19:37.907428 systemd-tmpfiles[234]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring. Sep 4 05:19:37.914914 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Sep 4 05:19:37.918692 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Sep 4 05:19:37.926446 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Sep 4 05:19:37.929272 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Sep 4 05:19:37.929592 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Sep 4 05:19:37.946685 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Sep 4 05:19:37.948236 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Sep 4 05:19:37.967884 dracut-cmdline[264]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=19f99bd65222f80f4cb00b5100edf579df788968f4e157fc2f808292a9d6de09 Sep 4 05:19:37.985854 systemd-resolved[255]: Positive Trust Anchors: Sep 4 05:19:37.985881 systemd-resolved[255]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Sep 4 05:19:37.985911 systemd-resolved[255]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Sep 4 05:19:37.988881 systemd-resolved[255]: Defaulting to hostname 'linux'. Sep 4 05:19:37.990136 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Sep 4 05:19:37.995235 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. 
Sep 4 05:19:38.082126 kernel: SCSI subsystem initialized Sep 4 05:19:38.091111 kernel: Loading iSCSI transport class v2.0-870. Sep 4 05:19:38.102121 kernel: iscsi: registered transport (tcp) Sep 4 05:19:38.124124 kernel: iscsi: registered transport (qla4xxx) Sep 4 05:19:38.124163 kernel: QLogic iSCSI HBA Driver Sep 4 05:19:38.144219 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Sep 4 05:19:38.163504 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Sep 4 05:19:38.164861 systemd[1]: Reached target network-pre.target - Preparation for Network. Sep 4 05:19:38.221721 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Sep 4 05:19:38.225121 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Sep 4 05:19:38.279130 kernel: raid6: avx2x4 gen() 29273 MB/s Sep 4 05:19:38.296133 kernel: raid6: avx2x2 gen() 30206 MB/s Sep 4 05:19:38.313228 kernel: raid6: avx2x1 gen() 16855 MB/s Sep 4 05:19:38.313266 kernel: raid6: using algorithm avx2x2 gen() 30206 MB/s Sep 4 05:19:38.331174 kernel: raid6: .... xor() 19770 MB/s, rmw enabled Sep 4 05:19:38.331241 kernel: raid6: using avx2x2 recovery algorithm Sep 4 05:19:38.356133 kernel: xor: automatically using best checksumming function avx Sep 4 05:19:38.657192 kernel: Btrfs loaded, zoned=no, fsverity=no Sep 4 05:19:38.668915 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Sep 4 05:19:38.672832 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Sep 4 05:19:38.705284 systemd-udevd[473]: Using default interface naming scheme 'v255'. Sep 4 05:19:38.711269 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Sep 4 05:19:38.712510 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Sep 4 05:19:38.748143 dracut-pre-trigger[477]: rd.md=0: removing MD RAID activation Sep 4 05:19:38.781941 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Sep 4 05:19:38.784907 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Sep 4 05:19:38.885021 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Sep 4 05:19:38.890721 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Sep 4 05:19:38.930198 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues Sep 4 05:19:38.945129 kernel: cryptd: max_cpu_qlen set to 1000 Sep 4 05:19:38.945189 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB) Sep 4 05:19:38.961133 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input2 Sep 4 05:19:38.964222 kernel: AES CTR mode by8 optimization enabled Sep 4 05:19:38.992141 kernel: libata version 3.00 loaded. Sep 4 05:19:38.999908 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Sep 4 05:19:38.999956 kernel: GPT:9289727 != 19775487 Sep 4 05:19:38.999970 kernel: GPT:Alternate GPT header not at the end of the disk. Sep 4 05:19:38.999980 kernel: GPT:9289727 != 19775487 Sep 4 05:19:39.000424 kernel: GPT: Use GNU Parted to correct GPT errors. Sep 4 05:19:39.002108 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Sep 4 05:19:39.002279 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Sep 4 05:19:39.002495 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. 
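The later "GPT:9289727 != 19775487" warnings come from the image's backup GPT header recording a smaller last LBA than the attached 10 GB virtual disk provides, the usual sign of a disk image copied onto a larger device (disk-uuid.service rewrites the headers afterwards). A small arithmetic check of the two sizes implied by those LBAs:

SECTOR = 512
image_last_lba, disk_last_lba = 9289727, 19775487
print((image_last_lba + 1) * SECTOR / 2**30)   # ~4.43 GiB, the size the image was built for
print((disk_last_lba + 1) * SECTOR / 2**30)    # ~9.43 GiB, matching "10.1 GB/9.43 GiB" for vda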
Sep 4 05:19:39.008222 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Sep 4 05:19:39.011360 kernel: ahci 0000:00:1f.2: version 3.0 Sep 4 05:19:39.011621 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16 Sep 4 05:19:39.010456 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Sep 4 05:19:39.021010 kernel: ahci 0000:00:1f.2: AHCI vers 0001.0000, 32 command slots, 1.5 Gbps, SATA mode Sep 4 05:19:39.021295 kernel: ahci 0000:00:1f.2: 6/6 ports implemented (port mask 0x3f) Sep 4 05:19:39.021540 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only Sep 4 05:19:39.021781 kernel: scsi host0: ahci Sep 4 05:19:39.021978 kernel: scsi host1: ahci Sep 4 05:19:39.022207 kernel: scsi host2: ahci Sep 4 05:19:39.020870 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. Sep 4 05:19:39.026124 kernel: scsi host3: ahci Sep 4 05:19:39.028119 kernel: scsi host4: ahci Sep 4 05:19:39.029192 kernel: scsi host5: ahci Sep 4 05:19:39.029409 kernel: ata1: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4100 irq 34 lpm-pol 1 Sep 4 05:19:39.030983 kernel: ata2: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4180 irq 34 lpm-pol 1 Sep 4 05:19:39.031005 kernel: ata3: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4200 irq 34 lpm-pol 1 Sep 4 05:19:39.031877 kernel: ata4: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4280 irq 34 lpm-pol 1 Sep 4 05:19:39.032776 kernel: ata5: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4300 irq 34 lpm-pol 1 Sep 4 05:19:39.033708 kernel: ata6: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4380 irq 34 lpm-pol 1 Sep 4 05:19:39.085854 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. Sep 4 05:19:39.106295 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Sep 4 05:19:39.138267 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. Sep 4 05:19:39.139745 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Sep 4 05:19:39.149924 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132. Sep 4 05:19:39.151223 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. Sep 4 05:19:39.154483 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Sep 4 05:19:39.249714 disk-uuid[635]: Primary Header is updated. Sep 4 05:19:39.249714 disk-uuid[635]: Secondary Entries is updated. Sep 4 05:19:39.249714 disk-uuid[635]: Secondary Header is updated. 
Sep 4 05:19:39.254120 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Sep 4 05:19:39.258119 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Sep 4 05:19:39.344118 kernel: ata4: SATA link down (SStatus 0 SControl 300) Sep 4 05:19:39.344178 kernel: ata5: SATA link down (SStatus 0 SControl 300) Sep 4 05:19:39.346111 kernel: ata1: SATA link down (SStatus 0 SControl 300) Sep 4 05:19:39.369955 kernel: ata6: SATA link down (SStatus 0 SControl 300) Sep 4 05:19:39.369979 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300) Sep 4 05:19:39.370154 kernel: ata2: SATA link down (SStatus 0 SControl 300) Sep 4 05:19:39.371108 kernel: ata3.00: LPM support broken, forcing max_power Sep 4 05:19:39.372421 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100 Sep 4 05:19:39.372472 kernel: ata3.00: applying bridge limits Sep 4 05:19:39.372483 kernel: ata3.00: LPM support broken, forcing max_power Sep 4 05:19:39.374103 kernel: ata3.00: configured for UDMA/100 Sep 4 05:19:39.376182 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5 Sep 4 05:19:39.424140 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray Sep 4 05:19:39.424488 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Sep 4 05:19:39.448144 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0 Sep 4 05:19:39.711674 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Sep 4 05:19:39.714420 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Sep 4 05:19:39.716880 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Sep 4 05:19:39.719224 systemd[1]: Reached target remote-fs.target - Remote File Systems. Sep 4 05:19:39.722657 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Sep 4 05:19:39.764288 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Sep 4 05:19:40.259123 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Sep 4 05:19:40.259721 disk-uuid[636]: The operation has completed successfully. Sep 4 05:19:40.292733 systemd[1]: disk-uuid.service: Deactivated successfully. Sep 4 05:19:40.292882 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Sep 4 05:19:40.321785 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Sep 4 05:19:40.350930 sh[665]: Success Sep 4 05:19:40.371137 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Sep 4 05:19:40.371215 kernel: device-mapper: uevent: version 1.0.3 Sep 4 05:19:40.372779 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev Sep 4 05:19:40.383121 kernel: device-mapper: verity: sha256 using shash "sha256-ni" Sep 4 05:19:40.416295 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Sep 4 05:19:40.420693 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Sep 4 05:19:40.433566 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. 
Sep 4 05:19:40.440559 kernel: BTRFS: device fsid 86622aa7-37c8-4500-9ab9-2b58ba08fd01 devid 1 transid 37 /dev/mapper/usr (253:0) scanned by mount (677) Sep 4 05:19:40.440593 kernel: BTRFS info (device dm-0): first mount of filesystem 86622aa7-37c8-4500-9ab9-2b58ba08fd01 Sep 4 05:19:40.440609 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Sep 4 05:19:40.446472 kernel: BTRFS info (device dm-0): disabling log replay at mount time Sep 4 05:19:40.446495 kernel: BTRFS info (device dm-0): enabling free space tree Sep 4 05:19:40.447762 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Sep 4 05:19:40.449166 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System. Sep 4 05:19:40.450535 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Sep 4 05:19:40.451363 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Sep 4 05:19:40.452990 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Sep 4 05:19:40.528681 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (710) Sep 4 05:19:40.528733 kernel: BTRFS info (device vda6): first mount of filesystem d2708db5-bd9b-4e1d-a33a-2fa9c5aa5c75 Sep 4 05:19:40.528745 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Sep 4 05:19:40.533204 kernel: BTRFS info (device vda6): turning on async discard Sep 4 05:19:40.533228 kernel: BTRFS info (device vda6): enabling free space tree Sep 4 05:19:40.538131 kernel: BTRFS info (device vda6): last unmount of filesystem d2708db5-bd9b-4e1d-a33a-2fa9c5aa5c75 Sep 4 05:19:40.539500 systemd[1]: Finished ignition-setup.service - Ignition (setup). Sep 4 05:19:40.541695 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Sep 4 05:19:40.643305 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Sep 4 05:19:40.645860 systemd[1]: Starting systemd-networkd.service - Network Configuration... Sep 4 05:19:40.729370 ignition[751]: Ignition 2.22.0 Sep 4 05:19:40.729385 ignition[751]: Stage: fetch-offline Sep 4 05:19:40.729446 ignition[751]: no configs at "/usr/lib/ignition/base.d" Sep 4 05:19:40.729458 ignition[751]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Sep 4 05:19:40.729576 ignition[751]: parsed url from cmdline: "" Sep 4 05:19:40.729580 ignition[751]: no config URL provided Sep 4 05:19:40.729585 ignition[751]: reading system config file "/usr/lib/ignition/user.ign" Sep 4 05:19:40.729594 ignition[751]: no config at "/usr/lib/ignition/user.ign" Sep 4 05:19:40.729619 ignition[751]: op(1): [started] loading QEMU firmware config module Sep 4 05:19:40.729626 ignition[751]: op(1): executing: "modprobe" "qemu_fw_cfg" Sep 4 05:19:40.737796 ignition[751]: op(1): [finished] loading QEMU firmware config module Sep 4 05:19:40.737819 ignition[751]: QEMU firmware config was not found. Ignoring... Sep 4 05:19:40.741238 systemd-networkd[851]: lo: Link UP Sep 4 05:19:40.741242 systemd-networkd[851]: lo: Gained carrier Sep 4 05:19:40.742835 systemd-networkd[851]: Enumeration completed Sep 4 05:19:40.743251 systemd-networkd[851]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 4 05:19:40.743255 systemd-networkd[851]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. 
Sep 4 05:19:40.743276 systemd[1]: Started systemd-networkd.service - Network Configuration. Sep 4 05:19:40.745014 systemd-networkd[851]: eth0: Link UP Sep 4 05:19:40.745048 systemd[1]: Reached target network.target - Network. Sep 4 05:19:40.745181 systemd-networkd[851]: eth0: Gained carrier Sep 4 05:19:40.745190 systemd-networkd[851]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 4 05:19:40.778129 systemd-networkd[851]: eth0: DHCPv4 address 10.0.0.60/16, gateway 10.0.0.1 acquired from 10.0.0.1 Sep 4 05:19:40.802263 ignition[751]: parsing config with SHA512: 1947331467a669bc8944272f22a1fc0adfd9596c68eec3c221c123f5c1233d10e2182a0e7f8e683f535dd2cd66dc27c424823137c85e3c7875bd582bf75dc9e6 Sep 4 05:19:40.809430 unknown[751]: fetched base config from "system" Sep 4 05:19:40.809442 unknown[751]: fetched user config from "qemu" Sep 4 05:19:40.809842 ignition[751]: fetch-offline: fetch-offline passed Sep 4 05:19:40.809905 ignition[751]: Ignition finished successfully Sep 4 05:19:40.827300 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Sep 4 05:19:40.830541 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Sep 4 05:19:40.831776 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Sep 4 05:19:40.883645 ignition[860]: Ignition 2.22.0 Sep 4 05:19:40.883664 ignition[860]: Stage: kargs Sep 4 05:19:40.883832 ignition[860]: no configs at "/usr/lib/ignition/base.d" Sep 4 05:19:40.883847 ignition[860]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Sep 4 05:19:40.884890 ignition[860]: kargs: kargs passed Sep 4 05:19:40.884950 ignition[860]: Ignition finished successfully Sep 4 05:19:40.891143 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Sep 4 05:19:40.894223 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Sep 4 05:19:40.939033 ignition[868]: Ignition 2.22.0 Sep 4 05:19:40.939047 ignition[868]: Stage: disks Sep 4 05:19:40.939238 ignition[868]: no configs at "/usr/lib/ignition/base.d" Sep 4 05:19:40.939248 ignition[868]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Sep 4 05:19:40.940323 ignition[868]: disks: disks passed Sep 4 05:19:40.940381 ignition[868]: Ignition finished successfully Sep 4 05:19:40.946649 systemd[1]: Finished ignition-disks.service - Ignition (disks). Sep 4 05:19:40.948806 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Sep 4 05:19:40.949934 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Sep 4 05:19:40.952076 systemd[1]: Reached target local-fs.target - Local File Systems. Sep 4 05:19:40.954141 systemd[1]: Reached target sysinit.target - System Initialization. Sep 4 05:19:40.955953 systemd[1]: Reached target basic.target - Basic System. Sep 4 05:19:40.957058 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Sep 4 05:19:40.994458 systemd-fsck[878]: ROOT: clean, 15/553520 files, 52789/553472 blocks Sep 4 05:19:41.013898 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Sep 4 05:19:41.015216 systemd[1]: Mounting sysroot.mount - /sysroot... Sep 4 05:19:41.138141 kernel: EXT4-fs (vda9): mounted filesystem 8bd68cb6-48ee-4381-8beb-7fde3b1f33fd r/w with ordered data mode. Quota mode: none. Sep 4 05:19:41.138740 systemd[1]: Mounted sysroot.mount - /sysroot. 
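The "parsing config with SHA512: ..." line above logs a digest of the rendered Ignition config after the base ("system") and user ("qemu") fragments are merged. An illustrative sketch of the same kind of digest over an arbitrary config file; which exact bytes Ignition hashes is an internal detail, so treat this only as an approximation:

import hashlib, sys

with open(sys.argv[1], "rb") as f:
    print(hashlib.sha512(f.read()).hexdigest())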
Sep 4 05:19:41.139517 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Sep 4 05:19:41.143310 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Sep 4 05:19:41.145258 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Sep 4 05:19:41.147401 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Sep 4 05:19:41.147472 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Sep 4 05:19:41.147508 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Sep 4 05:19:41.166402 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Sep 4 05:19:41.168382 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Sep 4 05:19:41.174270 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (887) Sep 4 05:19:41.174304 kernel: BTRFS info (device vda6): first mount of filesystem d2708db5-bd9b-4e1d-a33a-2fa9c5aa5c75 Sep 4 05:19:41.174315 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Sep 4 05:19:41.177824 kernel: BTRFS info (device vda6): turning on async discard Sep 4 05:19:41.177848 kernel: BTRFS info (device vda6): enabling free space tree Sep 4 05:19:41.180186 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Sep 4 05:19:41.220672 initrd-setup-root[911]: cut: /sysroot/etc/passwd: No such file or directory Sep 4 05:19:41.225687 initrd-setup-root[918]: cut: /sysroot/etc/group: No such file or directory Sep 4 05:19:41.231066 initrd-setup-root[925]: cut: /sysroot/etc/shadow: No such file or directory Sep 4 05:19:41.235425 initrd-setup-root[932]: cut: /sysroot/etc/gshadow: No such file or directory Sep 4 05:19:41.362119 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Sep 4 05:19:41.364676 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Sep 4 05:19:41.367072 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Sep 4 05:19:41.388101 kernel: BTRFS info (device vda6): last unmount of filesystem d2708db5-bd9b-4e1d-a33a-2fa9c5aa5c75 Sep 4 05:19:41.403771 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Sep 4 05:19:41.440603 systemd[1]: sysroot-oem.mount: Deactivated successfully. Sep 4 05:19:41.481487 ignition[1001]: INFO : Ignition 2.22.0 Sep 4 05:19:41.481487 ignition[1001]: INFO : Stage: mount Sep 4 05:19:41.483829 ignition[1001]: INFO : no configs at "/usr/lib/ignition/base.d" Sep 4 05:19:41.483829 ignition[1001]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Sep 4 05:19:41.483829 ignition[1001]: INFO : mount: mount passed Sep 4 05:19:41.483829 ignition[1001]: INFO : Ignition finished successfully Sep 4 05:19:41.485734 systemd[1]: Finished ignition-mount.service - Ignition (mount). Sep 4 05:19:41.488753 systemd[1]: Starting ignition-files.service - Ignition (files)... Sep 4 05:19:41.520661 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... 
Sep 4 05:19:41.543104 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (1013) Sep 4 05:19:41.545227 kernel: BTRFS info (device vda6): first mount of filesystem d2708db5-bd9b-4e1d-a33a-2fa9c5aa5c75 Sep 4 05:19:41.545251 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Sep 4 05:19:41.550388 kernel: BTRFS info (device vda6): turning on async discard Sep 4 05:19:41.550416 kernel: BTRFS info (device vda6): enabling free space tree Sep 4 05:19:41.552598 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Sep 4 05:19:41.646697 ignition[1030]: INFO : Ignition 2.22.0 Sep 4 05:19:41.646697 ignition[1030]: INFO : Stage: files Sep 4 05:19:41.648603 ignition[1030]: INFO : no configs at "/usr/lib/ignition/base.d" Sep 4 05:19:41.648603 ignition[1030]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Sep 4 05:19:41.650741 ignition[1030]: DEBUG : files: compiled without relabeling support, skipping Sep 4 05:19:41.651948 ignition[1030]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Sep 4 05:19:41.651948 ignition[1030]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Sep 4 05:19:41.654709 ignition[1030]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Sep 4 05:19:41.654709 ignition[1030]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Sep 4 05:19:41.657447 ignition[1030]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Sep 4 05:19:41.656584 unknown[1030]: wrote ssh authorized keys file for user: core Sep 4 05:19:41.660216 ignition[1030]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz" Sep 4 05:19:41.660216 ignition[1030]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.0-linux-amd64.tar.gz: attempt #1 Sep 4 05:19:41.697721 ignition[1030]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Sep 4 05:19:41.830289 ignition[1030]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz" Sep 4 05:19:41.832363 ignition[1030]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz" Sep 4 05:19:41.832363 ignition[1030]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1 Sep 4 05:19:42.005198 ignition[1030]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Sep 4 05:19:42.262597 ignition[1030]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz" Sep 4 05:19:42.262597 ignition[1030]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh" Sep 4 05:19:42.270058 ignition[1030]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh" Sep 4 05:19:42.270058 ignition[1030]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml" Sep 4 05:19:42.270058 ignition[1030]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml" Sep 4 05:19:42.270058 ignition[1030]: INFO : files: createFilesystemsFiles: 
createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Sep 4 05:19:42.270058 ignition[1030]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Sep 4 05:19:42.270058 ignition[1030]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Sep 4 05:19:42.270058 ignition[1030]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Sep 4 05:19:42.392188 ignition[1030]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf" Sep 4 05:19:42.394434 ignition[1030]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf" Sep 4 05:19:42.394434 ignition[1030]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Sep 4 05:19:42.399637 ignition[1030]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Sep 4 05:19:42.399637 ignition[1030]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Sep 4 05:19:42.399637 ignition[1030]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://extensions.flatcar.org/extensions/kubernetes-v1.32.4-x86-64.raw: attempt #1 Sep 4 05:19:42.601375 systemd-networkd[851]: eth0: Gained IPv6LL Sep 4 05:19:42.812791 ignition[1030]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK Sep 4 05:19:43.375671 ignition[1030]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Sep 4 05:19:43.375671 ignition[1030]: INFO : files: op(c): [started] processing unit "prepare-helm.service" Sep 4 05:19:43.379428 ignition[1030]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Sep 4 05:19:43.568916 ignition[1030]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Sep 4 05:19:43.568916 ignition[1030]: INFO : files: op(c): [finished] processing unit "prepare-helm.service" Sep 4 05:19:43.568916 ignition[1030]: INFO : files: op(e): [started] processing unit "coreos-metadata.service" Sep 4 05:19:43.568916 ignition[1030]: INFO : files: op(e): op(f): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Sep 4 05:19:43.577114 ignition[1030]: INFO : files: op(e): op(f): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Sep 4 05:19:43.577114 ignition[1030]: INFO : files: op(e): [finished] processing unit "coreos-metadata.service" Sep 4 05:19:43.577114 ignition[1030]: INFO : files: op(10): [started] setting preset to disabled for "coreos-metadata.service" Sep 4 05:19:43.595875 ignition[1030]: INFO : files: op(10): op(11): [started] removing enablement symlink(s) for "coreos-metadata.service" Sep 4 05:19:43.601759 ignition[1030]: INFO : files: op(10): op(11): [finished] removing enablement symlink(s) for "coreos-metadata.service" Sep 4 
05:19:43.603453 ignition[1030]: INFO : files: op(10): [finished] setting preset to disabled for "coreos-metadata.service" Sep 4 05:19:43.603453 ignition[1030]: INFO : files: op(12): [started] setting preset to enabled for "prepare-helm.service" Sep 4 05:19:43.603453 ignition[1030]: INFO : files: op(12): [finished] setting preset to enabled for "prepare-helm.service" Sep 4 05:19:43.603453 ignition[1030]: INFO : files: createResultFile: createFiles: op(13): [started] writing file "/sysroot/etc/.ignition-result.json" Sep 4 05:19:43.603453 ignition[1030]: INFO : files: createResultFile: createFiles: op(13): [finished] writing file "/sysroot/etc/.ignition-result.json" Sep 4 05:19:43.603453 ignition[1030]: INFO : files: files passed Sep 4 05:19:43.603453 ignition[1030]: INFO : Ignition finished successfully Sep 4 05:19:43.611025 systemd[1]: Finished ignition-files.service - Ignition (files). Sep 4 05:19:43.616600 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Sep 4 05:19:43.621198 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Sep 4 05:19:43.644814 systemd[1]: ignition-quench.service: Deactivated successfully. Sep 4 05:19:43.645039 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Sep 4 05:19:43.650251 initrd-setup-root-after-ignition[1059]: grep: /sysroot/oem/oem-release: No such file or directory Sep 4 05:19:43.654456 initrd-setup-root-after-ignition[1061]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Sep 4 05:19:43.656342 initrd-setup-root-after-ignition[1065]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Sep 4 05:19:43.658076 initrd-setup-root-after-ignition[1061]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Sep 4 05:19:43.660284 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Sep 4 05:19:43.661065 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Sep 4 05:19:43.665386 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Sep 4 05:19:43.735669 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Sep 4 05:19:43.745131 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Sep 4 05:19:43.747669 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Sep 4 05:19:43.749605 systemd[1]: Reached target initrd.target - Initrd Default Target. Sep 4 05:19:43.751560 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Sep 4 05:19:43.753771 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Sep 4 05:19:43.789243 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Sep 4 05:19:43.791904 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Sep 4 05:19:43.823207 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Sep 4 05:19:43.824501 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Sep 4 05:19:43.826685 systemd[1]: Stopped target timers.target - Timer Units. Sep 4 05:19:43.828776 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Sep 4 05:19:43.828899 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Sep 4 05:19:43.831243 systemd[1]: Stopped target initrd.target - Initrd Default Target. 
Sep 4 05:19:43.832719 systemd[1]: Stopped target basic.target - Basic System. Sep 4 05:19:43.834698 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Sep 4 05:19:43.836681 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Sep 4 05:19:43.838713 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Sep 4 05:19:43.840812 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System. Sep 4 05:19:43.843028 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Sep 4 05:19:43.845035 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Sep 4 05:19:43.847253 systemd[1]: Stopped target sysinit.target - System Initialization. Sep 4 05:19:43.849290 systemd[1]: Stopped target local-fs.target - Local File Systems. Sep 4 05:19:43.851380 systemd[1]: Stopped target swap.target - Swaps. Sep 4 05:19:43.853146 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Sep 4 05:19:43.853281 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Sep 4 05:19:43.855487 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Sep 4 05:19:43.856894 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Sep 4 05:19:43.858930 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Sep 4 05:19:43.859055 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Sep 4 05:19:43.861123 systemd[1]: dracut-initqueue.service: Deactivated successfully. Sep 4 05:19:43.861248 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Sep 4 05:19:43.863587 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Sep 4 05:19:43.863703 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Sep 4 05:19:43.865484 systemd[1]: Stopped target paths.target - Path Units. Sep 4 05:19:43.867188 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Sep 4 05:19:43.871174 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Sep 4 05:19:43.872796 systemd[1]: Stopped target slices.target - Slice Units. Sep 4 05:19:43.874736 systemd[1]: Stopped target sockets.target - Socket Units. Sep 4 05:19:43.876494 systemd[1]: iscsid.socket: Deactivated successfully. Sep 4 05:19:43.876584 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Sep 4 05:19:43.878455 systemd[1]: iscsiuio.socket: Deactivated successfully. Sep 4 05:19:43.878538 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Sep 4 05:19:43.880877 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Sep 4 05:19:43.880988 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Sep 4 05:19:43.882931 systemd[1]: ignition-files.service: Deactivated successfully. Sep 4 05:19:43.883052 systemd[1]: Stopped ignition-files.service - Ignition (files). Sep 4 05:19:43.885807 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Sep 4 05:19:43.887710 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Sep 4 05:19:43.887831 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Sep 4 05:19:43.890994 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Sep 4 05:19:43.892126 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. 
Sep 4 05:19:43.892300 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Sep 4 05:19:43.894499 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Sep 4 05:19:43.894660 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Sep 4 05:19:43.900803 systemd[1]: initrd-cleanup.service: Deactivated successfully. Sep 4 05:19:43.910292 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Sep 4 05:19:43.933039 ignition[1085]: INFO : Ignition 2.22.0 Sep 4 05:19:43.934332 ignition[1085]: INFO : Stage: umount Sep 4 05:19:43.934332 ignition[1085]: INFO : no configs at "/usr/lib/ignition/base.d" Sep 4 05:19:43.934332 ignition[1085]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Sep 4 05:19:43.933669 systemd[1]: sysroot-boot.mount: Deactivated successfully. Sep 4 05:19:43.941758 ignition[1085]: INFO : umount: umount passed Sep 4 05:19:43.943032 ignition[1085]: INFO : Ignition finished successfully Sep 4 05:19:43.945781 systemd[1]: ignition-mount.service: Deactivated successfully. Sep 4 05:19:43.945955 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Sep 4 05:19:43.947165 systemd[1]: Stopped target network.target - Network. Sep 4 05:19:43.948849 systemd[1]: ignition-disks.service: Deactivated successfully. Sep 4 05:19:43.948938 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Sep 4 05:19:43.950678 systemd[1]: ignition-kargs.service: Deactivated successfully. Sep 4 05:19:43.950744 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Sep 4 05:19:43.951005 systemd[1]: ignition-setup.service: Deactivated successfully. Sep 4 05:19:43.951071 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Sep 4 05:19:43.951378 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Sep 4 05:19:43.951434 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Sep 4 05:19:43.951858 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Sep 4 05:19:43.958504 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Sep 4 05:19:43.969433 systemd[1]: systemd-resolved.service: Deactivated successfully. Sep 4 05:19:43.969636 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Sep 4 05:19:43.974069 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully. Sep 4 05:19:43.974321 systemd[1]: systemd-networkd.service: Deactivated successfully. Sep 4 05:19:43.974456 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Sep 4 05:19:43.979718 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully. Sep 4 05:19:43.981274 systemd[1]: Stopped target network-pre.target - Preparation for Network. Sep 4 05:19:43.983626 systemd[1]: systemd-networkd.socket: Deactivated successfully. Sep 4 05:19:43.983693 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Sep 4 05:19:43.987063 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Sep 4 05:19:43.988195 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Sep 4 05:19:43.988279 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Sep 4 05:19:43.989345 systemd[1]: systemd-sysctl.service: Deactivated successfully. Sep 4 05:19:43.989407 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Sep 4 05:19:43.993551 systemd[1]: systemd-modules-load.service: Deactivated successfully. 
Sep 4 05:19:43.993615 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Sep 4 05:19:43.994663 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Sep 4 05:19:43.994731 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Sep 4 05:19:43.999304 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Sep 4 05:19:44.003896 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Sep 4 05:19:44.003983 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully. Sep 4 05:19:44.020967 systemd[1]: systemd-udevd.service: Deactivated successfully. Sep 4 05:19:44.022257 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Sep 4 05:19:44.025381 systemd[1]: network-cleanup.service: Deactivated successfully. Sep 4 05:19:44.025545 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Sep 4 05:19:44.048228 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Sep 4 05:19:44.048338 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Sep 4 05:19:44.049365 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Sep 4 05:19:44.049404 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Sep 4 05:19:44.052193 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Sep 4 05:19:44.052264 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Sep 4 05:19:44.053529 systemd[1]: dracut-cmdline.service: Deactivated successfully. Sep 4 05:19:44.053581 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Sep 4 05:19:44.054277 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Sep 4 05:19:44.054324 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Sep 4 05:19:44.055808 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Sep 4 05:19:44.061944 systemd[1]: systemd-network-generator.service: Deactivated successfully. Sep 4 05:19:44.062004 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line. Sep 4 05:19:44.066374 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Sep 4 05:19:44.066431 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Sep 4 05:19:44.069734 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Sep 4 05:19:44.069782 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Sep 4 05:19:44.074135 systemd[1]: run-credentials-systemd\x2dnetwork\x2dgenerator.service.mount: Deactivated successfully. Sep 4 05:19:44.074194 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully. Sep 4 05:19:44.074253 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. Sep 4 05:19:44.083261 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Sep 4 05:19:44.083398 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Sep 4 05:19:44.326222 systemd[1]: sysroot-boot.service: Deactivated successfully. Sep 4 05:19:44.326362 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Sep 4 05:19:44.328396 systemd[1]: Reached target initrd-switch-root.target - Switch Root. 
Sep 4 05:19:44.329999 systemd[1]: initrd-setup-root.service: Deactivated successfully. Sep 4 05:19:44.330054 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Sep 4 05:19:44.332828 systemd[1]: Starting initrd-switch-root.service - Switch Root... Sep 4 05:19:44.353016 systemd[1]: Switching root. Sep 4 05:19:44.388070 systemd-journald[220]: Journal stopped Sep 4 05:19:45.762497 systemd-journald[220]: Received SIGTERM from PID 1 (systemd). Sep 4 05:19:45.762567 kernel: SELinux: policy capability network_peer_controls=1 Sep 4 05:19:45.762585 kernel: SELinux: policy capability open_perms=1 Sep 4 05:19:45.762597 kernel: SELinux: policy capability extended_socket_class=1 Sep 4 05:19:45.762614 kernel: SELinux: policy capability always_check_network=0 Sep 4 05:19:45.762631 kernel: SELinux: policy capability cgroup_seclabel=1 Sep 4 05:19:45.762642 kernel: SELinux: policy capability nnp_nosuid_transition=1 Sep 4 05:19:45.762656 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Sep 4 05:19:45.762667 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Sep 4 05:19:45.762681 kernel: SELinux: policy capability userspace_initial_context=0 Sep 4 05:19:45.762693 kernel: audit: type=1403 audit(1756963184.928:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Sep 4 05:19:45.762706 systemd[1]: Successfully loaded SELinux policy in 66.944ms. Sep 4 05:19:45.762731 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 8.436ms. Sep 4 05:19:45.762750 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Sep 4 05:19:45.762763 systemd[1]: Detected virtualization kvm. Sep 4 05:19:45.762775 systemd[1]: Detected architecture x86-64. Sep 4 05:19:45.762787 systemd[1]: Detected first boot. Sep 4 05:19:45.762799 systemd[1]: Initializing machine ID from VM UUID. Sep 4 05:19:45.762819 zram_generator::config[1130]: No configuration found. Sep 4 05:19:45.762832 kernel: Guest personality initialized and is inactive Sep 4 05:19:45.762843 kernel: VMCI host device registered (name=vmci, major=10, minor=125) Sep 4 05:19:45.762855 kernel: Initialized host personality Sep 4 05:19:45.762866 kernel: NET: Registered PF_VSOCK protocol family Sep 4 05:19:45.762878 systemd[1]: Populated /etc with preset unit settings. Sep 4 05:19:45.762890 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully. Sep 4 05:19:45.762902 systemd[1]: initrd-switch-root.service: Deactivated successfully. Sep 4 05:19:45.762917 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Sep 4 05:19:45.762929 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Sep 4 05:19:45.762943 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Sep 4 05:19:45.762955 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Sep 4 05:19:45.762967 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Sep 4 05:19:45.762979 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Sep 4 05:19:45.762991 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Sep 4 05:19:45.763004 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. 
Sep 4 05:19:45.763016 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Sep 4 05:19:45.763031 systemd[1]: Created slice user.slice - User and Session Slice. Sep 4 05:19:45.763043 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Sep 4 05:19:45.763056 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Sep 4 05:19:45.763067 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Sep 4 05:19:45.763079 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Sep 4 05:19:45.763108 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Sep 4 05:19:45.763120 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Sep 4 05:19:45.763136 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Sep 4 05:19:45.763148 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Sep 4 05:19:45.763169 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Sep 4 05:19:45.763182 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Sep 4 05:19:45.763195 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Sep 4 05:19:45.763207 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Sep 4 05:19:45.763219 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Sep 4 05:19:45.763232 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Sep 4 05:19:45.763249 systemd[1]: Reached target remote-fs.target - Remote File Systems. Sep 4 05:19:45.763261 systemd[1]: Reached target slices.target - Slice Units. Sep 4 05:19:45.763276 systemd[1]: Reached target swap.target - Swaps. Sep 4 05:19:45.763288 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Sep 4 05:19:45.763299 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Sep 4 05:19:45.763312 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption. Sep 4 05:19:45.763324 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Sep 4 05:19:45.763336 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Sep 4 05:19:45.763348 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Sep 4 05:19:45.763361 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Sep 4 05:19:45.763373 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Sep 4 05:19:45.763392 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Sep 4 05:19:45.763405 systemd[1]: Mounting media.mount - External Media Directory... Sep 4 05:19:45.763417 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 4 05:19:45.763429 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Sep 4 05:19:45.763441 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Sep 4 05:19:45.763453 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Sep 4 05:19:45.763466 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). 
Sep 4 05:19:45.763478 systemd[1]: Reached target machines.target - Containers. Sep 4 05:19:45.763493 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Sep 4 05:19:45.763507 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Sep 4 05:19:45.763519 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Sep 4 05:19:45.763531 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Sep 4 05:19:45.763543 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Sep 4 05:19:45.763555 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Sep 4 05:19:45.763568 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Sep 4 05:19:45.763581 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Sep 4 05:19:45.763593 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Sep 4 05:19:45.763611 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Sep 4 05:19:45.763623 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Sep 4 05:19:45.763635 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Sep 4 05:19:45.763647 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Sep 4 05:19:45.763659 systemd[1]: Stopped systemd-fsck-usr.service. Sep 4 05:19:45.763672 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Sep 4 05:19:45.763684 kernel: loop: module loaded Sep 4 05:19:45.763695 systemd[1]: Starting systemd-journald.service - Journal Service... Sep 4 05:19:45.763710 kernel: fuse: init (API version 7.41) Sep 4 05:19:45.763722 kernel: ACPI: bus type drm_connector registered Sep 4 05:19:45.763733 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Sep 4 05:19:45.763745 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Sep 4 05:19:45.763780 systemd-journald[1194]: Collecting audit messages is disabled. Sep 4 05:19:45.763808 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Sep 4 05:19:45.763821 systemd-journald[1194]: Journal started Sep 4 05:19:45.763847 systemd-journald[1194]: Runtime Journal (/run/log/journal/9d22143419f64bdf979eb2a57ebf63ae) is 6M, max 48.6M, 42.5M free. Sep 4 05:19:45.468238 systemd[1]: Queued start job for default target multi-user.target. Sep 4 05:19:45.495354 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Sep 4 05:19:45.495860 systemd[1]: systemd-journald.service: Deactivated successfully. Sep 4 05:19:45.800991 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials... Sep 4 05:19:45.807109 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Sep 4 05:19:45.807168 systemd[1]: verity-setup.service: Deactivated successfully. Sep 4 05:19:45.807185 systemd[1]: Stopped verity-setup.service. Sep 4 05:19:45.811129 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). 
Sep 4 05:19:45.815532 systemd[1]: Started systemd-journald.service - Journal Service. Sep 4 05:19:45.816524 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Sep 4 05:19:45.817699 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Sep 4 05:19:45.818908 systemd[1]: Mounted media.mount - External Media Directory. Sep 4 05:19:45.820014 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Sep 4 05:19:45.821238 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Sep 4 05:19:45.822450 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Sep 4 05:19:45.823736 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Sep 4 05:19:45.825281 systemd[1]: modprobe@configfs.service: Deactivated successfully. Sep 4 05:19:45.825511 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Sep 4 05:19:45.826986 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 4 05:19:45.827242 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Sep 4 05:19:45.857510 systemd[1]: modprobe@drm.service: Deactivated successfully. Sep 4 05:19:45.857775 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Sep 4 05:19:45.859222 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 4 05:19:45.859447 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Sep 4 05:19:45.860921 systemd[1]: modprobe@fuse.service: Deactivated successfully. Sep 4 05:19:45.861170 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Sep 4 05:19:45.862504 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 4 05:19:45.862718 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Sep 4 05:19:45.864134 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Sep 4 05:19:45.865552 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Sep 4 05:19:45.867198 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Sep 4 05:19:45.877602 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials. Sep 4 05:19:45.884868 systemd[1]: Reached target network-pre.target - Preparation for Network. Sep 4 05:19:45.889386 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Sep 4 05:19:45.893204 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Sep 4 05:19:45.894434 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Sep 4 05:19:45.894532 systemd[1]: Reached target local-fs.target - Local File Systems. Sep 4 05:19:45.896542 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management. Sep 4 05:19:45.900261 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Sep 4 05:19:45.914325 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Sep 4 05:19:45.917216 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Sep 4 05:19:45.918300 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... 
Sep 4 05:19:45.918391 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Sep 4 05:19:45.919602 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Sep 4 05:19:45.921289 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Sep 4 05:19:45.923895 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Sep 4 05:19:45.931042 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Sep 4 05:19:45.936069 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Sep 4 05:19:45.937538 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Sep 4 05:19:45.944143 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Sep 4 05:19:45.947965 systemd-journald[1194]: Time spent on flushing to /var/log/journal/9d22143419f64bdf979eb2a57ebf63ae is 20.483ms for 989 entries. Sep 4 05:19:45.947965 systemd-journald[1194]: System Journal (/var/log/journal/9d22143419f64bdf979eb2a57ebf63ae) is 8M, max 195.6M, 187.6M free. Sep 4 05:19:45.985010 systemd-journald[1194]: Received client request to flush runtime journal. Sep 4 05:19:45.985055 kernel: loop0: detected capacity change from 0 to 111000 Sep 4 05:19:45.952301 systemd[1]: Starting systemd-sysusers.service - Create System Users... Sep 4 05:19:45.954910 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Sep 4 05:19:45.959491 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Sep 4 05:19:45.964214 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk... Sep 4 05:19:45.966167 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Sep 4 05:19:46.042125 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Sep 4 05:19:46.043584 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Sep 4 05:19:46.047524 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Sep 4 05:19:46.062583 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. Sep 4 05:19:46.067248 kernel: loop1: detected capacity change from 0 to 128016 Sep 4 05:19:46.079263 systemd[1]: Finished systemd-sysusers.service - Create System Users. Sep 4 05:19:46.081933 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Sep 4 05:19:46.100105 kernel: loop2: detected capacity change from 0 to 224512 Sep 4 05:19:46.122327 systemd-tmpfiles[1268]: ACLs are not supported, ignoring. Sep 4 05:19:46.122347 systemd-tmpfiles[1268]: ACLs are not supported, ignoring. Sep 4 05:19:46.127778 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Sep 4 05:19:46.137125 kernel: loop3: detected capacity change from 0 to 111000 Sep 4 05:19:46.149129 kernel: loop4: detected capacity change from 0 to 128016 Sep 4 05:19:46.167116 kernel: loop5: detected capacity change from 0 to 224512 Sep 4 05:19:46.363215 (sd-merge)[1272]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'. Sep 4 05:19:46.363833 (sd-merge)[1272]: Merged extensions into '/usr'. Sep 4 05:19:46.368736 systemd[1]: Reload requested from client PID 1248 ('systemd-sysext') (unit systemd-sysext.service)... 
Sep 4 05:19:46.368753 systemd[1]: Reloading... Sep 4 05:19:46.464182 zram_generator::config[1295]: No configuration found. Sep 4 05:19:46.657111 ldconfig[1236]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Sep 4 05:19:46.742896 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Sep 4 05:19:46.743154 systemd[1]: Reloading finished in 373 ms. Sep 4 05:19:46.779407 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Sep 4 05:19:46.781209 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Sep 4 05:19:46.799913 systemd[1]: Starting ensure-sysext.service... Sep 4 05:19:46.802741 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Sep 4 05:19:46.814753 systemd[1]: Reload requested from client PID 1335 ('systemctl') (unit ensure-sysext.service)... Sep 4 05:19:46.814772 systemd[1]: Reloading... Sep 4 05:19:46.835543 systemd-tmpfiles[1336]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring. Sep 4 05:19:46.835582 systemd-tmpfiles[1336]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring. Sep 4 05:19:46.835900 systemd-tmpfiles[1336]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Sep 4 05:19:46.836229 systemd-tmpfiles[1336]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Sep 4 05:19:46.837320 systemd-tmpfiles[1336]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Sep 4 05:19:46.837613 systemd-tmpfiles[1336]: ACLs are not supported, ignoring. Sep 4 05:19:46.837718 systemd-tmpfiles[1336]: ACLs are not supported, ignoring. Sep 4 05:19:46.843388 systemd-tmpfiles[1336]: Detected autofs mount point /boot during canonicalization of boot. Sep 4 05:19:46.843408 systemd-tmpfiles[1336]: Skipping /boot Sep 4 05:19:46.856274 systemd-tmpfiles[1336]: Detected autofs mount point /boot during canonicalization of boot. Sep 4 05:19:46.856289 systemd-tmpfiles[1336]: Skipping /boot Sep 4 05:19:46.880381 zram_generator::config[1362]: No configuration found. Sep 4 05:19:47.054967 systemd[1]: Reloading finished in 239 ms. Sep 4 05:19:47.078647 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Sep 4 05:19:47.108550 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Sep 4 05:19:47.119539 systemd[1]: Starting audit-rules.service - Load Audit Rules... Sep 4 05:19:47.122297 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Sep 4 05:19:47.124990 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Sep 4 05:19:47.131877 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Sep 4 05:19:47.135597 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Sep 4 05:19:47.139805 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Sep 4 05:19:47.144164 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 4 05:19:47.144348 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Sep 4 05:19:47.149294 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... 
Sep 4 05:19:47.151670 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Sep 4 05:19:47.154814 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Sep 4 05:19:47.156228 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Sep 4 05:19:47.156330 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Sep 4 05:19:47.159709 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Sep 4 05:19:47.160863 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 4 05:19:47.162330 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 4 05:19:47.163015 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Sep 4 05:19:47.169761 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Sep 4 05:19:47.171719 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 4 05:19:47.172176 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Sep 4 05:19:47.173852 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 4 05:19:47.174099 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Sep 4 05:19:47.184223 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 4 05:19:47.184444 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Sep 4 05:19:47.187116 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Sep 4 05:19:47.188952 systemd-udevd[1406]: Using default interface naming scheme 'v255'. Sep 4 05:19:47.190072 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Sep 4 05:19:47.201020 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Sep 4 05:19:47.202607 augenrules[1436]: No rules Sep 4 05:19:47.203037 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Sep 4 05:19:47.203177 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Sep 4 05:19:47.206148 systemd[1]: Starting systemd-update-done.service - Update is Completed... Sep 4 05:19:47.207426 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 4 05:19:47.209532 systemd[1]: audit-rules.service: Deactivated successfully. Sep 4 05:19:47.209850 systemd[1]: Finished audit-rules.service - Load Audit Rules. Sep 4 05:19:47.216407 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Sep 4 05:19:47.218309 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 4 05:19:47.218534 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Sep 4 05:19:47.221059 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. 
Sep 4 05:19:47.221746 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Sep 4 05:19:47.223748 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 4 05:19:47.223970 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Sep 4 05:19:47.226645 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Sep 4 05:19:47.228694 systemd[1]: Started systemd-userdbd.service - User Database Manager. Sep 4 05:19:47.234678 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Sep 4 05:19:47.239611 systemd[1]: Finished systemd-update-done.service - Update is Completed. Sep 4 05:19:47.253198 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 4 05:19:47.256328 systemd[1]: Starting audit-rules.service - Load Audit Rules... Sep 4 05:19:47.257917 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Sep 4 05:19:47.261281 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Sep 4 05:19:47.270715 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Sep 4 05:19:47.272985 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Sep 4 05:19:47.276342 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Sep 4 05:19:47.279424 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Sep 4 05:19:47.279536 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Sep 4 05:19:47.286418 systemd[1]: Starting systemd-networkd.service - Network Configuration... Sep 4 05:19:47.287561 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Sep 4 05:19:47.287671 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 4 05:19:47.289253 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 4 05:19:47.291158 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Sep 4 05:19:47.292976 systemd[1]: modprobe@drm.service: Deactivated successfully. Sep 4 05:19:47.293240 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Sep 4 05:19:47.294870 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 4 05:19:47.295110 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Sep 4 05:19:47.298892 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 4 05:19:47.299180 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Sep 4 05:19:47.303062 systemd[1]: Finished ensure-sysext.service. Sep 4 05:19:47.305530 augenrules[1470]: /sbin/augenrules: No change Sep 4 05:19:47.315678 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). 
Sep 4 05:19:47.315749 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Sep 4 05:19:47.317919 augenrules[1509]: No rules Sep 4 05:19:47.319849 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Sep 4 05:19:47.323552 systemd[1]: audit-rules.service: Deactivated successfully. Sep 4 05:19:47.323837 systemd[1]: Finished audit-rules.service - Load Audit Rules. Sep 4 05:19:47.363266 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Sep 4 05:19:47.414202 systemd-resolved[1404]: Positive Trust Anchors: Sep 4 05:19:47.414229 systemd-resolved[1404]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Sep 4 05:19:47.414267 systemd-resolved[1404]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Sep 4 05:19:47.435995 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Sep 4 05:19:47.470411 systemd-resolved[1404]: Defaulting to hostname 'linux'. Sep 4 05:19:47.491570 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Sep 4 05:19:47.507841 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Sep 4 05:19:47.517646 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Sep 4 05:19:47.632117 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3 Sep 4 05:19:47.640195 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Sep 4 05:19:47.650266 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Sep 4 05:19:47.660116 kernel: mousedev: PS/2 mouse device common for all mice Sep 4 05:19:47.664255 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt Sep 4 05:19:47.664524 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD Sep 4 05:19:47.699163 kernel: ACPI: button: Power Button [PWRF] Sep 4 05:19:47.709800 systemd-networkd[1492]: lo: Link UP Sep 4 05:19:47.710100 systemd-networkd[1492]: lo: Gained carrier Sep 4 05:19:47.711827 systemd-networkd[1492]: Enumeration completed Sep 4 05:19:47.711972 systemd[1]: Started systemd-networkd.service - Network Configuration. Sep 4 05:19:47.712151 systemd[1]: Reached target network.target - Network. Sep 4 05:19:47.714317 systemd-networkd[1492]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 4 05:19:47.714897 systemd-networkd[1492]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Sep 4 05:19:47.715340 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... 
Sep 4 05:19:47.718452 systemd-networkd[1492]: eth0: Link UP Sep 4 05:19:47.718730 systemd-networkd[1492]: eth0: Gained carrier Sep 4 05:19:47.718824 systemd-networkd[1492]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 4 05:19:47.719364 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Sep 4 05:19:47.726178 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Sep 4 05:19:47.726297 systemd[1]: Reached target time-set.target - System Time Set. Sep 4 05:19:47.766153 systemd-networkd[1492]: eth0: DHCPv4 address 10.0.0.60/16, gateway 10.0.0.1 acquired from 10.0.0.1 Sep 4 05:19:47.767517 systemd-timesyncd[1514]: Network configuration changed, trying to establish connection. Sep 4 05:19:49.158620 systemd-timesyncd[1514]: Contacted time server 10.0.0.1:123 (10.0.0.1). Sep 4 05:19:49.158671 systemd-timesyncd[1514]: Initial clock synchronization to Thu 2025-09-04 05:19:49.158529 UTC. Sep 4 05:19:49.158710 systemd-resolved[1404]: Clock change detected. Flushing caches. Sep 4 05:19:49.165958 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. Sep 4 05:19:49.182415 kernel: kvm_amd: TSC scaling supported Sep 4 05:19:49.182538 kernel: kvm_amd: Nested Virtualization enabled Sep 4 05:19:49.182619 kernel: kvm_amd: Nested Paging enabled Sep 4 05:19:49.182660 kernel: kvm_amd: LBR virtualization supported Sep 4 05:19:49.182687 kernel: kvm_amd: Virtual VMLOAD VMSAVE supported Sep 4 05:19:49.182710 kernel: kvm_amd: Virtual GIF supported Sep 4 05:19:49.213440 kernel: EDAC MC: Ver: 3.0.0 Sep 4 05:19:49.225593 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Sep 4 05:19:49.227277 systemd[1]: Reached target sysinit.target - System Initialization. Sep 4 05:19:49.228907 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Sep 4 05:19:49.230509 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Sep 4 05:19:49.232031 systemd[1]: Started google-oslogin-cache.timer - NSS cache refresh timer. Sep 4 05:19:49.233722 systemd[1]: Started logrotate.timer - Daily rotation of log files. Sep 4 05:19:49.235316 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Sep 4 05:19:49.236843 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Sep 4 05:19:49.238145 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Sep 4 05:19:49.238181 systemd[1]: Reached target paths.target - Path Units. Sep 4 05:19:49.239110 systemd[1]: Reached target timers.target - Timer Units. Sep 4 05:19:49.240968 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Sep 4 05:19:49.244075 systemd[1]: Starting docker.socket - Docker Socket for the API... Sep 4 05:19:49.247346 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). Sep 4 05:19:49.248764 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). Sep 4 05:19:49.249984 systemd[1]: Reached target ssh-access.target - SSH Access Available. Sep 4 05:19:49.257045 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. 
Sep 4 05:19:49.258733 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. Sep 4 05:19:49.260732 systemd[1]: Listening on docker.socket - Docker Socket for the API. Sep 4 05:19:49.262627 systemd[1]: Reached target sockets.target - Socket Units. Sep 4 05:19:49.263618 systemd[1]: Reached target basic.target - Basic System. Sep 4 05:19:49.264599 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Sep 4 05:19:49.264627 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Sep 4 05:19:49.265853 systemd[1]: Starting containerd.service - containerd container runtime... Sep 4 05:19:49.268011 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Sep 4 05:19:49.269961 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Sep 4 05:19:49.272181 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Sep 4 05:19:49.286280 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Sep 4 05:19:49.287438 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Sep 4 05:19:49.289442 jq[1561]: false Sep 4 05:19:49.289498 systemd[1]: Starting google-oslogin-cache.service - NSS cache refresh... Sep 4 05:19:49.291817 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Sep 4 05:19:49.294093 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Sep 4 05:19:49.298261 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Sep 4 05:19:49.308262 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Sep 4 05:19:49.308656 extend-filesystems[1562]: Found /dev/vda6 Sep 4 05:19:49.312756 google_oslogin_nss_cache[1563]: oslogin_cache_refresh[1563]: Refreshing passwd entry cache Sep 4 05:19:49.312715 oslogin_cache_refresh[1563]: Refreshing passwd entry cache Sep 4 05:19:49.314039 extend-filesystems[1562]: Found /dev/vda9 Sep 4 05:19:49.316704 extend-filesystems[1562]: Checking size of /dev/vda9 Sep 4 05:19:49.323326 systemd[1]: Starting systemd-logind.service - User Login Management... Sep 4 05:19:49.323924 google_oslogin_nss_cache[1563]: oslogin_cache_refresh[1563]: Failure getting users, quitting Sep 4 05:19:49.323924 google_oslogin_nss_cache[1563]: oslogin_cache_refresh[1563]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. Sep 4 05:19:49.323924 google_oslogin_nss_cache[1563]: oslogin_cache_refresh[1563]: Refreshing group entry cache Sep 4 05:19:49.323706 oslogin_cache_refresh[1563]: Failure getting users, quitting Sep 4 05:19:49.323734 oslogin_cache_refresh[1563]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. Sep 4 05:19:49.323811 oslogin_cache_refresh[1563]: Refreshing group entry cache Sep 4 05:19:49.325465 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Sep 4 05:19:49.326049 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Sep 4 05:19:49.326713 systemd[1]: Starting update-engine.service - Update Engine... 
Sep 4 05:19:49.328073 extend-filesystems[1562]: Resized partition /dev/vda9 Sep 4 05:19:49.331523 extend-filesystems[1587]: resize2fs 1.47.2 (1-Jan-2025) Sep 4 05:19:49.328919 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Sep 4 05:19:49.332143 oslogin_cache_refresh[1563]: Failure getting groups, quitting Sep 4 05:19:49.333691 google_oslogin_nss_cache[1563]: oslogin_cache_refresh[1563]: Failure getting groups, quitting Sep 4 05:19:49.333691 google_oslogin_nss_cache[1563]: oslogin_cache_refresh[1563]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Sep 4 05:19:49.332156 oslogin_cache_refresh[1563]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Sep 4 05:19:49.340451 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Sep 4 05:19:49.340688 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Sep 4 05:19:49.342968 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Sep 4 05:19:49.343227 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Sep 4 05:19:49.343597 systemd[1]: google-oslogin-cache.service: Deactivated successfully. Sep 4 05:19:49.343835 systemd[1]: Finished google-oslogin-cache.service - NSS cache refresh. Sep 4 05:19:49.346840 systemd[1]: motdgen.service: Deactivated successfully. Sep 4 05:19:49.347106 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Sep 4 05:19:49.349525 jq[1588]: true Sep 4 05:19:49.350941 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Sep 4 05:19:49.351240 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Sep 4 05:19:49.365764 (ntainerd)[1593]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Sep 4 05:19:49.375408 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Sep 4 05:19:49.395910 jq[1592]: true Sep 4 05:19:49.396076 update_engine[1586]: I20250904 05:19:49.384012 1586 main.cc:92] Flatcar Update Engine starting Sep 4 05:19:49.398820 extend-filesystems[1587]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Sep 4 05:19:49.398820 extend-filesystems[1587]: old_desc_blocks = 1, new_desc_blocks = 1 Sep 4 05:19:49.398820 extend-filesystems[1587]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. Sep 4 05:19:49.403845 extend-filesystems[1562]: Resized filesystem in /dev/vda9 Sep 4 05:19:49.400739 systemd[1]: extend-filesystems.service: Deactivated successfully. Sep 4 05:19:49.412016 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Sep 4 05:19:49.421958 tar[1590]: linux-amd64/LICENSE Sep 4 05:19:49.424325 tar[1590]: linux-amd64/helm Sep 4 05:19:49.447267 dbus-daemon[1559]: [system] SELinux support is enabled Sep 4 05:19:49.447842 systemd[1]: Started dbus.service - D-Bus System Message Bus. Sep 4 05:19:49.451953 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Sep 4 05:19:49.452088 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. 
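The resize2fs pass logged above grows /dev/vda9 from 553472 to 1864699 blocks at a 4 KiB block size. A short sketch of what those figures mean in bytes (block counts and block size copied from the log, everything else illustrative):

```python
#!/usr/bin/env python3
"""Back-of-the-envelope check of the online resize logged above."""

BLOCK_SIZE = 4096          # "(4k) blocks" per the resize2fs/EXT4 messages
OLD_BLOCKS = 553_472
NEW_BLOCKS = 1_864_699

def gib(blocks: int) -> float:
    """Convert an ext4 block count to GiB."""
    return blocks * BLOCK_SIZE / 2**30

print(f"before: {gib(OLD_BLOCKS):.2f} GiB")                  # ~2.11 GiB
print(f"after:  {gib(NEW_BLOCKS):.2f} GiB")                  # ~7.11 GiB
print(f"growth: {gib(NEW_BLOCKS) - gib(OLD_BLOCKS):.2f} GiB")
```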
Sep 4 05:19:49.453921 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Sep 4 05:19:49.453944 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Sep 4 05:19:49.459524 systemd[1]: Started update-engine.service - Update Engine. Sep 4 05:19:49.460404 update_engine[1586]: I20250904 05:19:49.460150 1586 update_check_scheduler.cc:74] Next update check in 10m56s Sep 4 05:19:49.460458 sshd_keygen[1583]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Sep 4 05:19:49.463097 systemd[1]: Started locksmithd.service - Cluster reboot manager. Sep 4 05:19:49.475984 systemd-logind[1582]: Watching system buttons on /dev/input/event2 (Power Button) Sep 4 05:19:49.476041 systemd-logind[1582]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Sep 4 05:19:49.479428 systemd-logind[1582]: New seat seat0. Sep 4 05:19:49.481408 systemd[1]: Started systemd-logind.service - User Login Management. Sep 4 05:19:49.485151 bash[1622]: Updated "/home/core/.ssh/authorized_keys" Sep 4 05:19:49.535553 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Sep 4 05:19:49.644358 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Sep 4 05:19:49.651790 systemd[1]: Starting issuegen.service - Generate /run/issue... Sep 4 05:19:49.653893 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Sep 4 05:19:49.677956 systemd[1]: issuegen.service: Deactivated successfully. Sep 4 05:19:49.678283 systemd[1]: Finished issuegen.service - Generate /run/issue. Sep 4 05:19:49.689612 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... 
Sep 4 05:19:49.697746 locksmithd[1624]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Sep 4 05:19:49.704014 containerd[1593]: time="2025-09-04T05:19:49Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8 Sep 4 05:19:49.705421 containerd[1593]: time="2025-09-04T05:19:49.705357892Z" level=info msg="starting containerd" revision=fb4c30d4ede3531652d86197bf3fc9515e5276d9 version=v2.0.5 Sep 4 05:19:49.715365 containerd[1593]: time="2025-09-04T05:19:49.715303968Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="10.078µs" Sep 4 05:19:49.715365 containerd[1593]: time="2025-09-04T05:19:49.715336970Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1 Sep 4 05:19:49.715365 containerd[1593]: time="2025-09-04T05:19:49.715355906Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1 Sep 4 05:19:49.715645 containerd[1593]: time="2025-09-04T05:19:49.715606836Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1 Sep 4 05:19:49.715645 containerd[1593]: time="2025-09-04T05:19:49.715631032Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1 Sep 4 05:19:49.715707 containerd[1593]: time="2025-09-04T05:19:49.715658914Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Sep 4 05:19:49.716134 containerd[1593]: time="2025-09-04T05:19:49.715771395Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Sep 4 05:19:49.716134 containerd[1593]: time="2025-09-04T05:19:49.715789629Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Sep 4 05:19:49.716338 containerd[1593]: time="2025-09-04T05:19:49.716271293Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Sep 4 05:19:49.716338 containerd[1593]: time="2025-09-04T05:19:49.716304014Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Sep 4 05:19:49.716421 containerd[1593]: time="2025-09-04T05:19:49.716340072Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Sep 4 05:19:49.716421 containerd[1593]: time="2025-09-04T05:19:49.716349990Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1 Sep 4 05:19:49.716556 containerd[1593]: time="2025-09-04T05:19:49.716526662Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1 Sep 4 05:19:49.716850 containerd[1593]: time="2025-09-04T05:19:49.716822917Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Sep 4 05:19:49.717462 containerd[1593]: time="2025-09-04T05:19:49.716862461Z" level=info msg="skip loading plugin" error="lstat 
/var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Sep 4 05:19:49.717462 containerd[1593]: time="2025-09-04T05:19:49.717458750Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1 Sep 4 05:19:49.717556 containerd[1593]: time="2025-09-04T05:19:49.717521467Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1 Sep 4 05:19:49.717893 containerd[1593]: time="2025-09-04T05:19:49.717857818Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1 Sep 4 05:19:49.717991 containerd[1593]: time="2025-09-04T05:19:49.717962395Z" level=info msg="metadata content store policy set" policy=shared Sep 4 05:19:49.718336 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Sep 4 05:19:49.722175 systemd[1]: Started getty@tty1.service - Getty on tty1. Sep 4 05:19:49.724770 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Sep 4 05:19:49.726028 systemd[1]: Reached target getty.target - Login Prompts. Sep 4 05:19:49.900618 containerd[1593]: time="2025-09-04T05:19:49.900481178Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1 Sep 4 05:19:49.900618 containerd[1593]: time="2025-09-04T05:19:49.900572349Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1 Sep 4 05:19:49.900618 containerd[1593]: time="2025-09-04T05:19:49.900619086Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1 Sep 4 05:19:49.900782 containerd[1593]: time="2025-09-04T05:19:49.900631850Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1 Sep 4 05:19:49.900782 containerd[1593]: time="2025-09-04T05:19:49.900644304Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1 Sep 4 05:19:49.900782 containerd[1593]: time="2025-09-04T05:19:49.900653781Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1 Sep 4 05:19:49.900782 containerd[1593]: time="2025-09-04T05:19:49.900700168Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1 Sep 4 05:19:49.900782 containerd[1593]: time="2025-09-04T05:19:49.900713814Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1 Sep 4 05:19:49.900782 containerd[1593]: time="2025-09-04T05:19:49.900730796Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1 Sep 4 05:19:49.900782 containerd[1593]: time="2025-09-04T05:19:49.900746125Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1 Sep 4 05:19:49.900782 containerd[1593]: time="2025-09-04T05:19:49.900762856Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1 Sep 4 05:19:49.900925 containerd[1593]: time="2025-09-04T05:19:49.900814122Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2 Sep 4 05:19:49.901079 containerd[1593]: time="2025-09-04T05:19:49.901045957Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1 Sep 4 05:19:49.901079 
containerd[1593]: time="2025-09-04T05:19:49.901073068Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1 Sep 4 05:19:49.901133 containerd[1593]: time="2025-09-04T05:19:49.901116068Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1 Sep 4 05:19:49.901180 containerd[1593]: time="2025-09-04T05:19:49.901149351Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1 Sep 4 05:19:49.901220 containerd[1593]: time="2025-09-04T05:19:49.901199495Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 Sep 4 05:19:49.901220 containerd[1593]: time="2025-09-04T05:19:49.901219362Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1 Sep 4 05:19:49.901271 containerd[1593]: time="2025-09-04T05:19:49.901230934Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1 Sep 4 05:19:49.901271 containerd[1593]: time="2025-09-04T05:19:49.901240973Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1 Sep 4 05:19:49.901342 containerd[1593]: time="2025-09-04T05:19:49.901270739Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 Sep 4 05:19:49.901342 containerd[1593]: time="2025-09-04T05:19:49.901281739Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 Sep 4 05:19:49.901342 containerd[1593]: time="2025-09-04T05:19:49.901291678Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 Sep 4 05:19:49.901494 containerd[1593]: time="2025-09-04T05:19:49.901471415Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" Sep 4 05:19:49.901494 containerd[1593]: time="2025-09-04T05:19:49.901493777Z" level=info msg="Start snapshots syncer" Sep 4 05:19:49.901561 containerd[1593]: time="2025-09-04T05:19:49.901550884Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 Sep 4 05:19:49.901849 containerd[1593]: time="2025-09-04T05:19:49.901812695Z" level=info msg="starting cri plugin" 
config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" Sep 4 05:19:49.902139 containerd[1593]: time="2025-09-04T05:19:49.901871125Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 Sep 4 05:19:49.902139 containerd[1593]: time="2025-09-04T05:19:49.901967085Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 Sep 4 05:19:49.902139 containerd[1593]: time="2025-09-04T05:19:49.902123568Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 Sep 4 05:19:49.902207 containerd[1593]: time="2025-09-04T05:19:49.902142764Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 Sep 4 05:19:49.902207 containerd[1593]: time="2025-09-04T05:19:49.902153104Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 Sep 4 05:19:49.902250 containerd[1593]: time="2025-09-04T05:19:49.902211393Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 Sep 4 05:19:49.902275 containerd[1593]: time="2025-09-04T05:19:49.902250546Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 Sep 4 05:19:49.902275 containerd[1593]: time="2025-09-04T05:19:49.902265915Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 Sep 4 05:19:49.902316 containerd[1593]: time="2025-09-04T05:19:49.902276495Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 Sep 4 05:19:49.902316 containerd[1593]: time="2025-09-04T05:19:49.902297805Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 Sep 4 05:19:49.902354 containerd[1593]: 
time="2025-09-04T05:19:49.902313484Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 Sep 4 05:19:49.902354 containerd[1593]: time="2025-09-04T05:19:49.902334454Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 Sep 4 05:19:49.902636 containerd[1593]: time="2025-09-04T05:19:49.902578361Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Sep 4 05:19:49.904127 containerd[1593]: time="2025-09-04T05:19:49.903098156Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Sep 4 05:19:49.904127 containerd[1593]: time="2025-09-04T05:19:49.903147689Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Sep 4 05:19:49.904127 containerd[1593]: time="2025-09-04T05:19:49.903164120Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Sep 4 05:19:49.904127 containerd[1593]: time="2025-09-04T05:19:49.903177906Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 Sep 4 05:19:49.904127 containerd[1593]: time="2025-09-04T05:19:49.903191672Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 Sep 4 05:19:49.904127 containerd[1593]: time="2025-09-04T05:19:49.903205357Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 Sep 4 05:19:49.904127 containerd[1593]: time="2025-09-04T05:19:49.903227439Z" level=info msg="runtime interface created" Sep 4 05:19:49.904127 containerd[1593]: time="2025-09-04T05:19:49.903233029Z" level=info msg="created NRI interface" Sep 4 05:19:49.904127 containerd[1593]: time="2025-09-04T05:19:49.903244481Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 Sep 4 05:19:49.904127 containerd[1593]: time="2025-09-04T05:19:49.903255592Z" level=info msg="Connect containerd service" Sep 4 05:19:49.904127 containerd[1593]: time="2025-09-04T05:19:49.903887948Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Sep 4 05:19:49.906174 containerd[1593]: time="2025-09-04T05:19:49.906144019Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Sep 4 05:19:49.926432 tar[1590]: linux-amd64/README.md Sep 4 05:19:49.969032 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. 
Sep 4 05:19:50.073333 containerd[1593]: time="2025-09-04T05:19:50.073269048Z" level=info msg="Start subscribing containerd event" Sep 4 05:19:50.073500 containerd[1593]: time="2025-09-04T05:19:50.073350531Z" level=info msg="Start recovering state" Sep 4 05:19:50.073500 containerd[1593]: time="2025-09-04T05:19:50.073490834Z" level=info msg="Start event monitor" Sep 4 05:19:50.073542 containerd[1593]: time="2025-09-04T05:19:50.073510832Z" level=info msg="Start cni network conf syncer for default" Sep 4 05:19:50.073542 containerd[1593]: time="2025-09-04T05:19:50.073519107Z" level=info msg="Start streaming server" Sep 4 05:19:50.073614 containerd[1593]: time="2025-09-04T05:19:50.073512375Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Sep 4 05:19:50.073614 containerd[1593]: time="2025-09-04T05:19:50.073604948Z" level=info msg=serving... address=/run/containerd/containerd.sock Sep 4 05:19:50.073654 containerd[1593]: time="2025-09-04T05:19:50.073530338Z" level=info msg="Registered namespace \"k8s.io\" with NRI" Sep 4 05:19:50.073654 containerd[1593]: time="2025-09-04T05:19:50.073630556Z" level=info msg="runtime interface starting up..." Sep 4 05:19:50.073654 containerd[1593]: time="2025-09-04T05:19:50.073636247Z" level=info msg="starting plugins..." Sep 4 05:19:50.073707 containerd[1593]: time="2025-09-04T05:19:50.073665512Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Sep 4 05:19:50.073891 containerd[1593]: time="2025-09-04T05:19:50.073841031Z" level=info msg="containerd successfully booted in 0.370347s" Sep 4 05:19:50.073961 systemd[1]: Started containerd.service - containerd container runtime. Sep 4 05:19:50.903690 systemd-networkd[1492]: eth0: Gained IPv6LL Sep 4 05:19:50.908063 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Sep 4 05:19:50.910364 systemd[1]: Reached target network-online.target - Network is Online. Sep 4 05:19:50.913741 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Sep 4 05:19:50.916652 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 4 05:19:50.942098 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Sep 4 05:19:51.045854 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Sep 4 05:19:51.048612 systemd[1]: coreos-metadata.service: Deactivated successfully. Sep 4 05:19:51.048984 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Sep 4 05:19:51.051865 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Sep 4 05:19:52.210604 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Sep 4 05:19:52.213854 systemd[1]: Started sshd@0-10.0.0.60:22-10.0.0.1:51962.service - OpenSSH per-connection server daemon (10.0.0.1:51962). Sep 4 05:19:52.297788 sshd[1690]: Accepted publickey for core from 10.0.0.1 port 51962 ssh2: RSA SHA256:Ny8nYDOBhPv0PH6gzvqXa8DSRfbQSyp+8RjA0Ibmoyo Sep 4 05:19:52.300053 sshd-session[1690]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 05:19:52.307105 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Sep 4 05:19:52.342325 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Sep 4 05:19:52.350687 systemd-logind[1582]: New session 1 of user core. Sep 4 05:19:52.377627 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. 
Sep 4 05:19:52.385149 systemd[1]: Starting user@500.service - User Manager for UID 500... Sep 4 05:19:52.393925 (systemd)[1699]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Sep 4 05:19:52.394425 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 4 05:19:52.396132 systemd[1]: Reached target multi-user.target - Multi-User System. Sep 4 05:19:52.399973 systemd-logind[1582]: New session c1 of user core. Sep 4 05:19:52.401487 (kubelet)[1701]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 4 05:19:52.548532 systemd[1699]: Queued start job for default target default.target. Sep 4 05:19:52.684094 systemd[1699]: Created slice app.slice - User Application Slice. Sep 4 05:19:52.684126 systemd[1699]: Reached target paths.target - Paths. Sep 4 05:19:52.684175 systemd[1699]: Reached target timers.target - Timers. Sep 4 05:19:52.685865 systemd[1699]: Starting dbus.socket - D-Bus User Message Bus Socket... Sep 4 05:19:52.698588 systemd[1699]: Listening on dbus.socket - D-Bus User Message Bus Socket. Sep 4 05:19:52.698774 systemd[1699]: Reached target sockets.target - Sockets. Sep 4 05:19:52.698831 systemd[1699]: Reached target basic.target - Basic System. Sep 4 05:19:52.698928 systemd[1699]: Reached target default.target - Main User Target. Sep 4 05:19:52.698981 systemd[1699]: Startup finished in 290ms. Sep 4 05:19:52.699365 systemd[1]: Started user@500.service - User Manager for UID 500. Sep 4 05:19:52.704693 systemd[1]: Started session-1.scope - Session 1 of User core. Sep 4 05:19:52.706393 systemd[1]: Startup finished in 3.361s (kernel) + 7.283s (initrd) + 6.452s (userspace) = 17.097s. Sep 4 05:19:52.767242 systemd[1]: Started sshd@1-10.0.0.60:22-10.0.0.1:51976.service - OpenSSH per-connection server daemon (10.0.0.1:51976). Sep 4 05:19:52.829005 sshd[1721]: Accepted publickey for core from 10.0.0.1 port 51976 ssh2: RSA SHA256:Ny8nYDOBhPv0PH6gzvqXa8DSRfbQSyp+8RjA0Ibmoyo Sep 4 05:19:52.831372 sshd-session[1721]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 05:19:52.836287 systemd-logind[1582]: New session 2 of user core. Sep 4 05:19:52.997659 systemd[1]: Started session-2.scope - Session 2 of User core. Sep 4 05:19:53.056583 sshd[1724]: Connection closed by 10.0.0.1 port 51976 Sep 4 05:19:53.058068 sshd-session[1721]: pam_unix(sshd:session): session closed for user core Sep 4 05:19:53.065784 systemd[1]: sshd@1-10.0.0.60:22-10.0.0.1:51976.service: Deactivated successfully. Sep 4 05:19:53.067999 systemd[1]: session-2.scope: Deactivated successfully. Sep 4 05:19:53.069108 systemd-logind[1582]: Session 2 logged out. Waiting for processes to exit. Sep 4 05:19:53.073015 systemd[1]: Started sshd@2-10.0.0.60:22-10.0.0.1:51986.service - OpenSSH per-connection server daemon (10.0.0.1:51986). Sep 4 05:19:53.073924 systemd-logind[1582]: Removed session 2. Sep 4 05:19:53.131770 sshd[1731]: Accepted publickey for core from 10.0.0.1 port 51986 ssh2: RSA SHA256:Ny8nYDOBhPv0PH6gzvqXa8DSRfbQSyp+8RjA0Ibmoyo Sep 4 05:19:53.133620 sshd-session[1731]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 05:19:53.138319 systemd-logind[1582]: New session 3 of user core. Sep 4 05:19:53.150629 systemd[1]: Started session-3.scope - Session 3 of User core. 
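The `Startup finished` line above splits boot time into kernel, initrd and userspace phases. A small sketch that sums the logged figures and shows each phase's share; the values are copied from the log, and millisecond rounding explains the tiny difference from systemd's own 17.097s total:

```python
#!/usr/bin/env python3
"""Break down the boot-time figures reported by systemd above."""

phases = {"kernel": 3.361, "initrd": 7.283, "userspace": 6.452}

total = sum(phases.values())
for name, seconds in phases.items():
    print(f"{name:>9}: {seconds:6.3f}s ({seconds / total:5.1%} of boot)")
# The per-phase figures are already rounded to milliseconds, so this sum
# (17.096s) can differ from systemd's reported total (17.097s) by ~1 ms.
print(f"    total: {total:6.3f}s")
```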
Sep 4 05:19:53.202832 sshd[1734]: Connection closed by 10.0.0.1 port 51986 Sep 4 05:19:53.204794 sshd-session[1731]: pam_unix(sshd:session): session closed for user core Sep 4 05:19:53.215412 systemd[1]: sshd@2-10.0.0.60:22-10.0.0.1:51986.service: Deactivated successfully. Sep 4 05:19:53.218173 systemd[1]: session-3.scope: Deactivated successfully. Sep 4 05:19:53.219028 systemd-logind[1582]: Session 3 logged out. Waiting for processes to exit. Sep 4 05:19:53.222590 systemd[1]: Started sshd@3-10.0.0.60:22-10.0.0.1:51996.service - OpenSSH per-connection server daemon (10.0.0.1:51996). Sep 4 05:19:53.223242 systemd-logind[1582]: Removed session 3. Sep 4 05:19:53.299184 sshd[1740]: Accepted publickey for core from 10.0.0.1 port 51996 ssh2: RSA SHA256:Ny8nYDOBhPv0PH6gzvqXa8DSRfbQSyp+8RjA0Ibmoyo Sep 4 05:19:53.301486 sshd-session[1740]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 05:19:53.306795 systemd-logind[1582]: New session 4 of user core. Sep 4 05:19:53.317617 systemd[1]: Started session-4.scope - Session 4 of User core. Sep 4 05:19:53.331057 kubelet[1701]: E0904 05:19:53.330981 1701 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 4 05:19:53.336738 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 4 05:19:53.336971 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 4 05:19:53.337489 systemd[1]: kubelet.service: Consumed 2.182s CPU time, 264.2M memory peak. Sep 4 05:19:53.377429 sshd[1743]: Connection closed by 10.0.0.1 port 51996 Sep 4 05:19:53.377798 sshd-session[1740]: pam_unix(sshd:session): session closed for user core Sep 4 05:19:53.386825 systemd[1]: sshd@3-10.0.0.60:22-10.0.0.1:51996.service: Deactivated successfully. Sep 4 05:19:53.389064 systemd[1]: session-4.scope: Deactivated successfully. Sep 4 05:19:53.390016 systemd-logind[1582]: Session 4 logged out. Waiting for processes to exit. Sep 4 05:19:53.392848 systemd[1]: Started sshd@4-10.0.0.60:22-10.0.0.1:52002.service - OpenSSH per-connection server daemon (10.0.0.1:52002). Sep 4 05:19:53.393491 systemd-logind[1582]: Removed session 4. Sep 4 05:19:53.460399 sshd[1750]: Accepted publickey for core from 10.0.0.1 port 52002 ssh2: RSA SHA256:Ny8nYDOBhPv0PH6gzvqXa8DSRfbQSyp+8RjA0Ibmoyo Sep 4 05:19:53.461975 sshd-session[1750]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 05:19:53.466911 systemd-logind[1582]: New session 5 of user core. Sep 4 05:19:53.480733 systemd[1]: Started session-5.scope - Session 5 of User core. Sep 4 05:19:53.541249 sudo[1755]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Sep 4 05:19:53.541665 sudo[1755]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 4 05:19:53.573287 sudo[1755]: pam_unix(sudo:session): session closed for user root Sep 4 05:19:53.575254 sshd[1754]: Connection closed by 10.0.0.1 port 52002 Sep 4 05:19:53.575727 sshd-session[1750]: pam_unix(sshd:session): session closed for user core Sep 4 05:19:53.585154 systemd[1]: sshd@4-10.0.0.60:22-10.0.0.1:52002.service: Deactivated successfully. Sep 4 05:19:53.587120 systemd[1]: session-5.scope: Deactivated successfully. Sep 4 05:19:53.587977 systemd-logind[1582]: Session 5 logged out. 
Waiting for processes to exit. Sep 4 05:19:53.591184 systemd[1]: Started sshd@5-10.0.0.60:22-10.0.0.1:52006.service - OpenSSH per-connection server daemon (10.0.0.1:52006). Sep 4 05:19:53.591737 systemd-logind[1582]: Removed session 5. Sep 4 05:19:53.647915 sshd[1761]: Accepted publickey for core from 10.0.0.1 port 52006 ssh2: RSA SHA256:Ny8nYDOBhPv0PH6gzvqXa8DSRfbQSyp+8RjA0Ibmoyo Sep 4 05:19:53.649190 sshd-session[1761]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 05:19:53.653972 systemd-logind[1582]: New session 6 of user core. Sep 4 05:19:53.663515 systemd[1]: Started session-6.scope - Session 6 of User core. Sep 4 05:19:53.717686 sudo[1766]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Sep 4 05:19:53.717997 sudo[1766]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 4 05:19:53.725075 sudo[1766]: pam_unix(sudo:session): session closed for user root Sep 4 05:19:53.731897 sudo[1765]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Sep 4 05:19:53.732214 sudo[1765]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 4 05:19:53.742669 systemd[1]: Starting audit-rules.service - Load Audit Rules... Sep 4 05:19:53.789028 augenrules[1788]: No rules Sep 4 05:19:53.790922 systemd[1]: audit-rules.service: Deactivated successfully. Sep 4 05:19:53.791211 systemd[1]: Finished audit-rules.service - Load Audit Rules. Sep 4 05:19:53.792581 sudo[1765]: pam_unix(sudo:session): session closed for user root Sep 4 05:19:53.794410 sshd[1764]: Connection closed by 10.0.0.1 port 52006 Sep 4 05:19:53.794764 sshd-session[1761]: pam_unix(sshd:session): session closed for user core Sep 4 05:19:53.807679 systemd[1]: sshd@5-10.0.0.60:22-10.0.0.1:52006.service: Deactivated successfully. Sep 4 05:19:53.809617 systemd[1]: session-6.scope: Deactivated successfully. Sep 4 05:19:53.810341 systemd-logind[1582]: Session 6 logged out. Waiting for processes to exit. Sep 4 05:19:53.813144 systemd[1]: Started sshd@6-10.0.0.60:22-10.0.0.1:52018.service - OpenSSH per-connection server daemon (10.0.0.1:52018). Sep 4 05:19:53.813936 systemd-logind[1582]: Removed session 6. Sep 4 05:19:53.869550 sshd[1797]: Accepted publickey for core from 10.0.0.1 port 52018 ssh2: RSA SHA256:Ny8nYDOBhPv0PH6gzvqXa8DSRfbQSyp+8RjA0Ibmoyo Sep 4 05:19:53.871007 sshd-session[1797]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 05:19:53.875687 systemd-logind[1582]: New session 7 of user core. Sep 4 05:19:53.889593 systemd[1]: Started session-7.scope - Session 7 of User core. Sep 4 05:19:53.943225 sudo[1801]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Sep 4 05:19:53.943582 sudo[1801]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 4 05:19:54.710521 systemd[1]: Starting docker.service - Docker Application Container Engine... 
Sep 4 05:19:54.776890 (dockerd)[1821]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Sep 4 05:19:55.264362 dockerd[1821]: time="2025-09-04T05:19:55.264275843Z" level=info msg="Starting up" Sep 4 05:19:55.265356 dockerd[1821]: time="2025-09-04T05:19:55.265332555Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" Sep 4 05:19:55.284525 dockerd[1821]: time="2025-09-04T05:19:55.284478548Z" level=info msg="Creating a containerd client" address=/var/run/docker/libcontainerd/docker-containerd.sock timeout=1m0s Sep 4 05:19:58.590417 dockerd[1821]: time="2025-09-04T05:19:58.590273674Z" level=info msg="Loading containers: start." Sep 4 05:19:58.625594 kernel: Initializing XFRM netlink socket Sep 4 05:19:59.166556 systemd-networkd[1492]: docker0: Link UP Sep 4 05:19:59.463323 dockerd[1821]: time="2025-09-04T05:19:59.463138730Z" level=info msg="Loading containers: done." Sep 4 05:19:59.485984 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck1180081657-merged.mount: Deactivated successfully. Sep 4 05:19:59.545563 dockerd[1821]: time="2025-09-04T05:19:59.545353642Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Sep 4 05:19:59.545563 dockerd[1821]: time="2025-09-04T05:19:59.545552294Z" level=info msg="Docker daemon" commit=6430e49a55babd9b8f4d08e70ecb2b68900770fe containerd-snapshotter=false storage-driver=overlay2 version=28.0.4 Sep 4 05:19:59.545753 dockerd[1821]: time="2025-09-04T05:19:59.545673662Z" level=info msg="Initializing buildkit" Sep 4 05:19:59.678626 dockerd[1821]: time="2025-09-04T05:19:59.678547134Z" level=info msg="Completed buildkit initialization" Sep 4 05:19:59.685269 dockerd[1821]: time="2025-09-04T05:19:59.685220161Z" level=info msg="Daemon has completed initialization" Sep 4 05:19:59.686425 dockerd[1821]: time="2025-09-04T05:19:59.685312975Z" level=info msg="API listen on /run/docker.sock" Sep 4 05:19:59.685531 systemd[1]: Started docker.service - Docker Application Container Engine. Sep 4 05:20:00.871057 containerd[1593]: time="2025-09-04T05:20:00.870984306Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.8\"" Sep 4 05:20:02.574898 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1643705234.mount: Deactivated successfully. Sep 4 05:20:03.373357 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Sep 4 05:20:03.375629 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 4 05:20:03.691539 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 4 05:20:03.711750 (kubelet)[2102]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 4 05:20:03.793286 kubelet[2102]: E0904 05:20:03.793213 2102 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 4 05:20:03.801679 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 4 05:20:03.801870 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
Sep 4 05:20:03.802251 systemd[1]: kubelet.service: Consumed 372ms CPU time, 110.7M memory peak. Sep 4 05:20:04.315539 containerd[1593]: time="2025-09-04T05:20:04.315453956Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.32.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 05:20:04.362442 containerd[1593]: time="2025-09-04T05:20:04.362370518Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.32.8: active requests=0, bytes read=28800687" Sep 4 05:20:04.408597 containerd[1593]: time="2025-09-04T05:20:04.408526414Z" level=info msg="ImageCreate event name:\"sha256:0d4edaa48e2f940c934e0f7cfd5209fc85e65ab5e842b980f41263d1764661f1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 05:20:04.441992 containerd[1593]: time="2025-09-04T05:20:04.441915462Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:6e1a2f9b24f69ee77d0c0edaf32b31fdbb5e1a613f4476272197e6e1e239050b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 05:20:04.442793 containerd[1593]: time="2025-09-04T05:20:04.442739768Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.32.8\" with image id \"sha256:0d4edaa48e2f940c934e0f7cfd5209fc85e65ab5e842b980f41263d1764661f1\", repo tag \"registry.k8s.io/kube-apiserver:v1.32.8\", repo digest \"registry.k8s.io/kube-apiserver@sha256:6e1a2f9b24f69ee77d0c0edaf32b31fdbb5e1a613f4476272197e6e1e239050b\", size \"28797487\" in 3.571679379s" Sep 4 05:20:04.442793 containerd[1593]: time="2025-09-04T05:20:04.442791315Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.8\" returns image reference \"sha256:0d4edaa48e2f940c934e0f7cfd5209fc85e65ab5e842b980f41263d1764661f1\"" Sep 4 05:20:04.443667 containerd[1593]: time="2025-09-04T05:20:04.443446774Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.8\"" Sep 4 05:20:07.083864 containerd[1593]: time="2025-09-04T05:20:07.083768914Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.32.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 05:20:07.084505 containerd[1593]: time="2025-09-04T05:20:07.084452376Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.32.8: active requests=0, bytes read=24784128" Sep 4 05:20:07.086075 containerd[1593]: time="2025-09-04T05:20:07.086043761Z" level=info msg="ImageCreate event name:\"sha256:b248d0b0c74ad8230e0bae0cbed477560e8a1e8c7ef5f29b7e75c1f273c8a091\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 05:20:07.089901 containerd[1593]: time="2025-09-04T05:20:07.089846874Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:8788ccd28ceed9e2e5f8fc31375ef5771df8ea6e518b362c9a06f3cc709cd6c7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 05:20:07.091044 containerd[1593]: time="2025-09-04T05:20:07.090998164Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.32.8\" with image id \"sha256:b248d0b0c74ad8230e0bae0cbed477560e8a1e8c7ef5f29b7e75c1f273c8a091\", repo tag \"registry.k8s.io/kube-controller-manager:v1.32.8\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:8788ccd28ceed9e2e5f8fc31375ef5771df8ea6e518b362c9a06f3cc709cd6c7\", size \"26387322\" in 2.647516835s" Sep 4 05:20:07.091112 containerd[1593]: time="2025-09-04T05:20:07.091057916Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.8\" returns image reference 
\"sha256:b248d0b0c74ad8230e0bae0cbed477560e8a1e8c7ef5f29b7e75c1f273c8a091\"" Sep 4 05:20:07.092089 containerd[1593]: time="2025-09-04T05:20:07.092017957Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.8\"" Sep 4 05:20:08.802299 containerd[1593]: time="2025-09-04T05:20:08.802214797Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.32.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 05:20:08.803351 containerd[1593]: time="2025-09-04T05:20:08.803324218Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.32.8: active requests=0, bytes read=19175036" Sep 4 05:20:08.804663 containerd[1593]: time="2025-09-04T05:20:08.804607615Z" level=info msg="ImageCreate event name:\"sha256:2ac266f06c9a5a3d0d20ae482dbccb54d3be454d5ca49f48b528bdf5bae3e908\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 05:20:08.807482 containerd[1593]: time="2025-09-04T05:20:08.807443695Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:43c58bcbd1c7812dd19f8bfa5ae11093ebefd28699453ce86fc710869e155cd4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 05:20:08.808421 containerd[1593]: time="2025-09-04T05:20:08.808330869Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.32.8\" with image id \"sha256:2ac266f06c9a5a3d0d20ae482dbccb54d3be454d5ca49f48b528bdf5bae3e908\", repo tag \"registry.k8s.io/kube-scheduler:v1.32.8\", repo digest \"registry.k8s.io/kube-scheduler@sha256:43c58bcbd1c7812dd19f8bfa5ae11093ebefd28699453ce86fc710869e155cd4\", size \"20778248\" in 1.716270062s" Sep 4 05:20:08.808421 containerd[1593]: time="2025-09-04T05:20:08.808413043Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.8\" returns image reference \"sha256:2ac266f06c9a5a3d0d20ae482dbccb54d3be454d5ca49f48b528bdf5bae3e908\"" Sep 4 05:20:08.809182 containerd[1593]: time="2025-09-04T05:20:08.808954168Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.8\"" Sep 4 05:20:09.932086 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2878516760.mount: Deactivated successfully. 
Sep 4 05:20:10.523721 containerd[1593]: time="2025-09-04T05:20:10.523652703Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.32.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 05:20:10.524393 containerd[1593]: time="2025-09-04T05:20:10.524347857Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.32.8: active requests=0, bytes read=30897170" Sep 4 05:20:10.525458 containerd[1593]: time="2025-09-04T05:20:10.525414688Z" level=info msg="ImageCreate event name:\"sha256:d7b94972d43c5d6ce8088a8bcd08614a5ecf2bf04166232c688adcd0b8ed4b12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 05:20:10.527349 containerd[1593]: time="2025-09-04T05:20:10.527293031Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:adc1335b480ddd833aac3b0bd20f68ff0f3c3cf7a0bd337933b006d9f5cec40a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 05:20:10.527729 containerd[1593]: time="2025-09-04T05:20:10.527676029Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.32.8\" with image id \"sha256:d7b94972d43c5d6ce8088a8bcd08614a5ecf2bf04166232c688adcd0b8ed4b12\", repo tag \"registry.k8s.io/kube-proxy:v1.32.8\", repo digest \"registry.k8s.io/kube-proxy@sha256:adc1335b480ddd833aac3b0bd20f68ff0f3c3cf7a0bd337933b006d9f5cec40a\", size \"30896189\" in 1.718690592s" Sep 4 05:20:10.527729 containerd[1593]: time="2025-09-04T05:20:10.527721454Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.8\" returns image reference \"sha256:d7b94972d43c5d6ce8088a8bcd08614a5ecf2bf04166232c688adcd0b8ed4b12\"" Sep 4 05:20:10.528234 containerd[1593]: time="2025-09-04T05:20:10.528204210Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" Sep 4 05:20:11.478536 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1198873254.mount: Deactivated successfully. 
Sep 4 05:20:13.078747 containerd[1593]: time="2025-09-04T05:20:13.078673161Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 05:20:13.080458 containerd[1593]: time="2025-09-04T05:20:13.080411171Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=18565241" Sep 4 05:20:13.081646 containerd[1593]: time="2025-09-04T05:20:13.081610130Z" level=info msg="ImageCreate event name:\"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 05:20:13.084658 containerd[1593]: time="2025-09-04T05:20:13.084621839Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 05:20:13.085503 containerd[1593]: time="2025-09-04T05:20:13.085457857Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"18562039\" in 2.557221407s" Sep 4 05:20:13.085503 containerd[1593]: time="2025-09-04T05:20:13.085486721Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\"" Sep 4 05:20:13.086082 containerd[1593]: time="2025-09-04T05:20:13.085979054Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Sep 4 05:20:13.873673 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Sep 4 05:20:13.875818 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 4 05:20:14.149012 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 4 05:20:14.160924 (kubelet)[2187]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 4 05:20:14.223123 kubelet[2187]: E0904 05:20:14.223054 2187 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 4 05:20:14.227430 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 4 05:20:14.227647 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 4 05:20:14.228060 systemd[1]: kubelet.service: Consumed 285ms CPU time, 111.2M memory peak. Sep 4 05:20:14.637996 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3924539474.mount: Deactivated successfully. 
Sep 4 05:20:14.645426 containerd[1593]: time="2025-09-04T05:20:14.645320664Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 4 05:20:14.646187 containerd[1593]: time="2025-09-04T05:20:14.646151843Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138" Sep 4 05:20:14.647520 containerd[1593]: time="2025-09-04T05:20:14.647474333Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 4 05:20:14.652635 containerd[1593]: time="2025-09-04T05:20:14.652553981Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 4 05:20:14.653137 containerd[1593]: time="2025-09-04T05:20:14.653097972Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 1.567094671s" Sep 4 05:20:14.653137 containerd[1593]: time="2025-09-04T05:20:14.653126575Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Sep 4 05:20:14.653852 containerd[1593]: time="2025-09-04T05:20:14.653662450Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\"" Sep 4 05:20:15.334958 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1444648097.mount: Deactivated successfully. 
Sep 4 05:20:19.531264 containerd[1593]: time="2025-09-04T05:20:19.531170992Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.16-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 05:20:19.706684 containerd[1593]: time="2025-09-04T05:20:19.706609785Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.16-0: active requests=0, bytes read=57682056" Sep 4 05:20:19.765833 containerd[1593]: time="2025-09-04T05:20:19.765757259Z" level=info msg="ImageCreate event name:\"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 05:20:19.791218 containerd[1593]: time="2025-09-04T05:20:19.791084436Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 05:20:19.792589 containerd[1593]: time="2025-09-04T05:20:19.792555024Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.16-0\" with image id \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\", repo tag \"registry.k8s.io/etcd:3.5.16-0\", repo digest \"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\", size \"57680541\" in 5.138862377s" Sep 4 05:20:19.792681 containerd[1593]: time="2025-09-04T05:20:19.792593526Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\" returns image reference \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\"" Sep 4 05:20:22.119482 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Sep 4 05:20:22.119679 systemd[1]: kubelet.service: Consumed 285ms CPU time, 111.2M memory peak. Sep 4 05:20:22.122152 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 4 05:20:22.155936 systemd[1]: Reload requested from client PID 2284 ('systemctl') (unit session-7.scope)... Sep 4 05:20:22.155957 systemd[1]: Reloading... Sep 4 05:20:22.257453 zram_generator::config[2330]: No configuration found. Sep 4 05:20:23.057701 systemd[1]: Reloading finished in 901 ms. Sep 4 05:20:23.129082 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Sep 4 05:20:23.129184 systemd[1]: kubelet.service: Failed with result 'signal'. Sep 4 05:20:23.129522 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Sep 4 05:20:23.129560 systemd[1]: kubelet.service: Consumed 172ms CPU time, 98.3M memory peak. Sep 4 05:20:23.131060 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 4 05:20:23.314761 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 4 05:20:23.326722 (kubelet)[2375]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Sep 4 05:20:23.377588 kubelet[2375]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 4 05:20:23.377588 kubelet[2375]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Sep 4 05:20:23.377588 kubelet[2375]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. 
See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 4 05:20:23.378153 kubelet[2375]: I0904 05:20:23.377626 2375 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Sep 4 05:20:23.760035 kubelet[2375]: I0904 05:20:23.759970 2375 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Sep 4 05:20:23.760035 kubelet[2375]: I0904 05:20:23.760009 2375 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Sep 4 05:20:23.760316 kubelet[2375]: I0904 05:20:23.760292 2375 server.go:954] "Client rotation is on, will bootstrap in background" Sep 4 05:20:23.794122 kubelet[2375]: E0904 05:20:23.794082 2375 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.60:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.60:6443: connect: connection refused" logger="UnhandledError" Sep 4 05:20:23.798845 kubelet[2375]: I0904 05:20:23.798807 2375 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Sep 4 05:20:23.812990 kubelet[2375]: I0904 05:20:23.812971 2375 server.go:1444] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Sep 4 05:20:23.818462 kubelet[2375]: I0904 05:20:23.818435 2375 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Sep 4 05:20:23.823833 kubelet[2375]: I0904 05:20:23.823783 2375 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Sep 4 05:20:23.824048 kubelet[2375]: I0904 05:20:23.823823 2375 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Sep 4 05:20:23.824148 kubelet[2375]: I0904 05:20:23.824060 2375 
topology_manager.go:138] "Creating topology manager with none policy" Sep 4 05:20:23.824148 kubelet[2375]: I0904 05:20:23.824070 2375 container_manager_linux.go:304] "Creating device plugin manager" Sep 4 05:20:23.824245 kubelet[2375]: I0904 05:20:23.824226 2375 state_mem.go:36] "Initialized new in-memory state store" Sep 4 05:20:23.830174 kubelet[2375]: I0904 05:20:23.830117 2375 kubelet.go:446] "Attempting to sync node with API server" Sep 4 05:20:23.830225 kubelet[2375]: I0904 05:20:23.830209 2375 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Sep 4 05:20:23.830279 kubelet[2375]: I0904 05:20:23.830262 2375 kubelet.go:352] "Adding apiserver pod source" Sep 4 05:20:23.830302 kubelet[2375]: I0904 05:20:23.830288 2375 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Sep 4 05:20:23.836052 kubelet[2375]: W0904 05:20:23.835997 2375 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.60:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.60:6443: connect: connection refused Sep 4 05:20:23.836103 kubelet[2375]: E0904 05:20:23.836070 2375 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.60:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.60:6443: connect: connection refused" logger="UnhandledError" Sep 4 05:20:23.837626 kubelet[2375]: W0904 05:20:23.837545 2375 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.60:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.60:6443: connect: connection refused Sep 4 05:20:23.837694 kubelet[2375]: E0904 05:20:23.837641 2375 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.60:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.60:6443: connect: connection refused" logger="UnhandledError" Sep 4 05:20:23.839060 kubelet[2375]: I0904 05:20:23.839019 2375 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v2.0.5" apiVersion="v1" Sep 4 05:20:23.839481 kubelet[2375]: I0904 05:20:23.839459 2375 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Sep 4 05:20:23.840275 kubelet[2375]: W0904 05:20:23.840242 2375 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
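
Editor's note (not part of the captured log): the repeated reflector failures above — Get "https://10.0.0.60:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost..." dial tcp connection refused — are the kubelet's informers trying to LIST its own Node and the cluster Services before the static kube-apiserver pod has come up. A minimal client-go sketch that issues the equivalent request; the kubeconfig path is an assumption for illustration, not read from this log (the real kubelet uses its bootstrap/rotated client credentials).

package main

import (
	"context"
	"fmt"
	"log"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Hypothetical kubeconfig path (assumption).
	cfg, err := clientcmd.BuildConfigFromFlags("", "/etc/kubernetes/kubelet.conf")
	if err != nil {
		log.Fatal(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}
	// Same shape as the reflector's LIST: filter to this node's own object.
	nodes, err := cs.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{
		FieldSelector: "metadata.name=localhost",
		Limit:         500,
	})
	if err != nil {
		// While the control plane is still starting this fails with
		// "connection refused", exactly as logged; the informer retries with backoff.
		log.Fatal(err)
	}
	fmt.Printf("found %d node(s)\n", len(nodes.Items))
}
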
Sep 4 05:20:23.843602 kubelet[2375]: I0904 05:20:23.843570 2375 watchdog_linux.go:99] "Systemd watchdog is not enabled" Sep 4 05:20:23.843663 kubelet[2375]: I0904 05:20:23.843615 2375 server.go:1287] "Started kubelet" Sep 4 05:20:23.843998 kubelet[2375]: I0904 05:20:23.843951 2375 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Sep 4 05:20:23.844955 kubelet[2375]: I0904 05:20:23.844921 2375 server.go:479] "Adding debug handlers to kubelet server" Sep 4 05:20:23.847259 kubelet[2375]: I0904 05:20:23.847187 2375 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Sep 4 05:20:23.847522 kubelet[2375]: I0904 05:20:23.847491 2375 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Sep 4 05:20:23.861625 kubelet[2375]: I0904 05:20:23.861601 2375 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Sep 4 05:20:23.862026 kubelet[2375]: I0904 05:20:23.861944 2375 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Sep 4 05:20:23.864890 kubelet[2375]: I0904 05:20:23.863368 2375 volume_manager.go:297] "Starting Kubelet Volume Manager" Sep 4 05:20:23.864944 kubelet[2375]: I0904 05:20:23.863535 2375 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Sep 4 05:20:23.864944 kubelet[2375]: W0904 05:20:23.864203 2375 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.60:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.60:6443: connect: connection refused Sep 4 05:20:23.864944 kubelet[2375]: E0904 05:20:23.864934 2375 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.60:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.60:6443: connect: connection refused" logger="UnhandledError" Sep 4 05:20:23.864944 kubelet[2375]: E0904 05:20:23.864237 2375 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 4 05:20:23.865049 kubelet[2375]: I0904 05:20:23.865027 2375 reconciler.go:26] "Reconciler: start to sync state" Sep 4 05:20:23.865613 kubelet[2375]: E0904 05:20:23.865242 2375 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.60:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.60:6443: connect: connection refused" interval="200ms" Sep 4 05:20:23.866188 kubelet[2375]: E0904 05:20:23.866157 2375 kubelet.go:1555] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Sep 4 05:20:23.867318 kubelet[2375]: I0904 05:20:23.867292 2375 factory.go:221] Registration of the containerd container factory successfully Sep 4 05:20:23.867318 kubelet[2375]: I0904 05:20:23.867308 2375 factory.go:221] Registration of the systemd container factory successfully Sep 4 05:20:23.867430 kubelet[2375]: I0904 05:20:23.867408 2375 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Sep 4 05:20:23.872154 kubelet[2375]: E0904 05:20:23.866900 2375 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.60:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.60:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.1861fcbe489f5cbf default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-09-04 05:20:23.843593407 +0000 UTC m=+0.510696881,LastTimestamp:2025-09-04 05:20:23.843593407 +0000 UTC m=+0.510696881,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Sep 4 05:20:23.886291 kubelet[2375]: I0904 05:20:23.886252 2375 cpu_manager.go:221] "Starting CPU manager" policy="none" Sep 4 05:20:23.886291 kubelet[2375]: I0904 05:20:23.886274 2375 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Sep 4 05:20:23.886291 kubelet[2375]: I0904 05:20:23.886290 2375 state_mem.go:36] "Initialized new in-memory state store" Sep 4 05:20:23.965363 kubelet[2375]: E0904 05:20:23.965314 2375 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 4 05:20:24.065792 kubelet[2375]: E0904 05:20:24.065687 2375 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 4 05:20:24.066061 kubelet[2375]: E0904 05:20:24.066026 2375 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.60:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.60:6443: connect: connection refused" interval="400ms" Sep 4 05:20:24.166523 kubelet[2375]: E0904 05:20:24.166487 2375 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 4 05:20:24.267212 kubelet[2375]: E0904 05:20:24.267129 2375 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 4 05:20:24.367422 kubelet[2375]: E0904 05:20:24.367250 2375 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 4 05:20:24.467303 kubelet[2375]: E0904 05:20:24.467244 2375 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.60:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.60:6443: connect: connection refused" interval="800ms" Sep 4 05:20:24.468400 kubelet[2375]: E0904 05:20:24.468348 2375 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 4 
05:20:24.569090 kubelet[2375]: E0904 05:20:24.569052 2375 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 4 05:20:24.669865 kubelet[2375]: E0904 05:20:24.669805 2375 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 4 05:20:24.770519 kubelet[2375]: E0904 05:20:24.770435 2375 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 4 05:20:24.871550 kubelet[2375]: E0904 05:20:24.871494 2375 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 4 05:20:24.960278 kubelet[2375]: W0904 05:20:24.960186 2375 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.60:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.60:6443: connect: connection refused Sep 4 05:20:24.960278 kubelet[2375]: E0904 05:20:24.960221 2375 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.60:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.60:6443: connect: connection refused" logger="UnhandledError" Sep 4 05:20:24.972037 kubelet[2375]: E0904 05:20:24.972006 2375 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 4 05:20:25.018545 kubelet[2375]: W0904 05:20:25.018481 2375 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.60:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.60:6443: connect: connection refused Sep 4 05:20:25.018624 kubelet[2375]: E0904 05:20:25.018547 2375 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.60:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.60:6443: connect: connection refused" logger="UnhandledError" Sep 4 05:20:25.072314 kubelet[2375]: E0904 05:20:25.072262 2375 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 4 05:20:25.173113 kubelet[2375]: E0904 05:20:25.173048 2375 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 4 05:20:25.268179 kubelet[2375]: E0904 05:20:25.268044 2375 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.60:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.60:6443: connect: connection refused" interval="1.6s" Sep 4 05:20:25.273231 kubelet[2375]: E0904 05:20:25.273191 2375 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 4 05:20:25.356425 kubelet[2375]: W0904 05:20:25.356340 2375 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.60:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.60:6443: connect: connection refused Sep 4 05:20:25.356533 kubelet[2375]: E0904 05:20:25.356438 2375 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed 
to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.60:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.60:6443: connect: connection refused" logger="UnhandledError" Sep 4 05:20:25.374193 kubelet[2375]: E0904 05:20:25.374162 2375 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 4 05:20:25.474919 kubelet[2375]: E0904 05:20:25.474851 2375 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 4 05:20:25.575696 kubelet[2375]: E0904 05:20:25.575554 2375 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 4 05:20:25.634456 kubelet[2375]: I0904 05:20:25.634395 2375 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Sep 4 05:20:25.635944 kubelet[2375]: I0904 05:20:25.635922 2375 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Sep 4 05:20:25.635998 kubelet[2375]: I0904 05:20:25.635975 2375 status_manager.go:227] "Starting to sync pod status with apiserver" Sep 4 05:20:25.637167 kubelet[2375]: I0904 05:20:25.636161 2375 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Sep 4 05:20:25.637167 kubelet[2375]: I0904 05:20:25.636177 2375 kubelet.go:2382] "Starting kubelet main sync loop" Sep 4 05:20:25.637167 kubelet[2375]: E0904 05:20:25.636238 2375 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Sep 4 05:20:25.637167 kubelet[2375]: W0904 05:20:25.636560 2375 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.60:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.60:6443: connect: connection refused Sep 4 05:20:25.637167 kubelet[2375]: E0904 05:20:25.636600 2375 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.60:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.60:6443: connect: connection refused" logger="UnhandledError" Sep 4 05:20:25.676250 kubelet[2375]: E0904 05:20:25.676221 2375 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 4 05:20:25.736407 kubelet[2375]: E0904 05:20:25.736350 2375 kubelet.go:2406] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Sep 4 05:20:25.775174 kubelet[2375]: I0904 05:20:25.775127 2375 policy_none.go:49] "None policy: Start" Sep 4 05:20:25.775240 kubelet[2375]: I0904 05:20:25.775206 2375 memory_manager.go:186] "Starting memorymanager" policy="None" Sep 4 05:20:25.775240 kubelet[2375]: I0904 05:20:25.775234 2375 state_mem.go:35] "Initializing new in-memory state store" Sep 4 05:20:25.776977 kubelet[2375]: E0904 05:20:25.776937 2375 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 4 05:20:25.833175 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Sep 4 05:20:25.846985 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. 
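
Editor's note: the recurring "Failed to ensure lease exists, will retry" errors come from the kubelet's node lease controller, which maintains a coordination.k8s.io/v1 Lease named after the node in the kube-node-lease namespace as its heartbeat. A minimal sketch of inspecting that Lease once the API server is reachable; the admin kubeconfig path is an assumption.

package main

import (
	"context"
	"fmt"
	"log"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/etc/kubernetes/admin.conf") // assumed path
	if err != nil {
		log.Fatal(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}
	// The kubelet renews this object periodically; its renewTime is what the
	// node lifecycle controller uses to judge node health.
	lease, err := cs.CoordinationV1().Leases("kube-node-lease").Get(
		context.TODO(), "localhost", metav1.GetOptions{})
	if err != nil {
		log.Fatal(err) // "connection refused" while the control plane is down, as logged above
	}
	fmt.Printf("lease %s resourceVersion=%s renewTime=%v\n",
		lease.Name, lease.ResourceVersion, lease.Spec.RenewTime)
}
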
Sep 4 05:20:25.850682 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Sep 4 05:20:25.858407 kubelet[2375]: I0904 05:20:25.858345 2375 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Sep 4 05:20:25.858635 kubelet[2375]: I0904 05:20:25.858608 2375 eviction_manager.go:189] "Eviction manager: starting control loop" Sep 4 05:20:25.858716 kubelet[2375]: I0904 05:20:25.858636 2375 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Sep 4 05:20:25.858958 kubelet[2375]: I0904 05:20:25.858872 2375 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Sep 4 05:20:25.859661 kubelet[2375]: E0904 05:20:25.859640 2375 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Sep 4 05:20:25.859725 kubelet[2375]: E0904 05:20:25.859685 2375 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Sep 4 05:20:25.944054 kubelet[2375]: E0904 05:20:25.943897 2375 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.60:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.60:6443: connect: connection refused" logger="UnhandledError" Sep 4 05:20:25.946463 systemd[1]: Created slice kubepods-burstable-podb4351e0f26db1ede4b566b3d7ffc6b26.slice - libcontainer container kubepods-burstable-podb4351e0f26db1ede4b566b3d7ffc6b26.slice. Sep 4 05:20:25.960996 kubelet[2375]: I0904 05:20:25.960948 2375 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Sep 4 05:20:25.961364 kubelet[2375]: E0904 05:20:25.961321 2375 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.60:6443/api/v1/nodes\": dial tcp 10.0.0.60:6443: connect: connection refused" node="localhost" Sep 4 05:20:25.961549 kubelet[2375]: E0904 05:20:25.961520 2375 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 4 05:20:25.964841 systemd[1]: Created slice kubepods-burstable-poda88c9297c136b0f15880bf567e89a977.slice - libcontainer container kubepods-burstable-poda88c9297c136b0f15880bf567e89a977.slice. 
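
Editor's note: "Attempting to register node" followed by "Unable to register node with API server" err="Post \"https://10.0.0.60:6443/api/v1/nodes\"..." is the kubelet's self-registration loop. A sketch of the equivalent Create call with client-go; the real kubelet also fills in addresses, capacity, labels and taints, and the kubeconfig path below is an assumption.

package main

import (
	"context"
	"log"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/etc/kubernetes/kubelet.conf") // assumed path
	if err != nil {
		log.Fatal(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}
	// Node registration boils down to a POST /api/v1/nodes like this one.
	node := &corev1.Node{ObjectMeta: metav1.ObjectMeta{Name: "localhost"}}
	if _, err := cs.CoreV1().Nodes().Create(context.TODO(), node, metav1.CreateOptions{}); err != nil {
		// Fails with "connection refused" until kube-apiserver is serving, matching the log.
		log.Fatal(err)
	}
	log.Println("node registered")
}
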
Sep 4 05:20:25.975783 kubelet[2375]: I0904 05:20:25.975743 2375 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/a88c9297c136b0f15880bf567e89a977-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"a88c9297c136b0f15880bf567e89a977\") " pod="kube-system/kube-controller-manager-localhost" Sep 4 05:20:25.975783 kubelet[2375]: I0904 05:20:25.975770 2375 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/a88c9297c136b0f15880bf567e89a977-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"a88c9297c136b0f15880bf567e89a977\") " pod="kube-system/kube-controller-manager-localhost" Sep 4 05:20:25.975783 kubelet[2375]: I0904 05:20:25.975786 2375 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/a88c9297c136b0f15880bf567e89a977-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"a88c9297c136b0f15880bf567e89a977\") " pod="kube-system/kube-controller-manager-localhost" Sep 4 05:20:25.975900 kubelet[2375]: I0904 05:20:25.975800 2375 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/a9176403b596d0b29ae8ad12d635226d-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"a9176403b596d0b29ae8ad12d635226d\") " pod="kube-system/kube-scheduler-localhost" Sep 4 05:20:25.975900 kubelet[2375]: I0904 05:20:25.975814 2375 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/b4351e0f26db1ede4b566b3d7ffc6b26-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"b4351e0f26db1ede4b566b3d7ffc6b26\") " pod="kube-system/kube-apiserver-localhost" Sep 4 05:20:25.975900 kubelet[2375]: I0904 05:20:25.975834 2375 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/b4351e0f26db1ede4b566b3d7ffc6b26-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"b4351e0f26db1ede4b566b3d7ffc6b26\") " pod="kube-system/kube-apiserver-localhost" Sep 4 05:20:25.975900 kubelet[2375]: I0904 05:20:25.975860 2375 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/b4351e0f26db1ede4b566b3d7ffc6b26-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"b4351e0f26db1ede4b566b3d7ffc6b26\") " pod="kube-system/kube-apiserver-localhost" Sep 4 05:20:25.975900 kubelet[2375]: I0904 05:20:25.975884 2375 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/a88c9297c136b0f15880bf567e89a977-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"a88c9297c136b0f15880bf567e89a977\") " pod="kube-system/kube-controller-manager-localhost" Sep 4 05:20:25.976023 kubelet[2375]: I0904 05:20:25.975898 2375 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/a88c9297c136b0f15880bf567e89a977-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"a88c9297c136b0f15880bf567e89a977\") " 
pod="kube-system/kube-controller-manager-localhost" Sep 4 05:20:25.979662 kubelet[2375]: E0904 05:20:25.979637 2375 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 4 05:20:25.981581 systemd[1]: Created slice kubepods-burstable-poda9176403b596d0b29ae8ad12d635226d.slice - libcontainer container kubepods-burstable-poda9176403b596d0b29ae8ad12d635226d.slice. Sep 4 05:20:25.983224 kubelet[2375]: E0904 05:20:25.983189 2375 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 4 05:20:26.163297 kubelet[2375]: I0904 05:20:26.163260 2375 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Sep 4 05:20:26.163712 kubelet[2375]: E0904 05:20:26.163667 2375 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.60:6443/api/v1/nodes\": dial tcp 10.0.0.60:6443: connect: connection refused" node="localhost" Sep 4 05:20:26.263031 containerd[1593]: time="2025-09-04T05:20:26.262970188Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:b4351e0f26db1ede4b566b3d7ffc6b26,Namespace:kube-system,Attempt:0,}" Sep 4 05:20:26.280994 containerd[1593]: time="2025-09-04T05:20:26.280962614Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:a88c9297c136b0f15880bf567e89a977,Namespace:kube-system,Attempt:0,}" Sep 4 05:20:26.284527 containerd[1593]: time="2025-09-04T05:20:26.284499746Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:a9176403b596d0b29ae8ad12d635226d,Namespace:kube-system,Attempt:0,}" Sep 4 05:20:26.443434 kubelet[2375]: W0904 05:20:26.443317 2375 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.60:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.60:6443: connect: connection refused Sep 4 05:20:26.443434 kubelet[2375]: E0904 05:20:26.443354 2375 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.60:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.60:6443: connect: connection refused" logger="UnhandledError" Sep 4 05:20:26.565288 kubelet[2375]: I0904 05:20:26.565235 2375 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Sep 4 05:20:26.565732 kubelet[2375]: E0904 05:20:26.565525 2375 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.60:6443/api/v1/nodes\": dial tcp 10.0.0.60:6443: connect: connection refused" node="localhost" Sep 4 05:20:26.737473 containerd[1593]: time="2025-09-04T05:20:26.736979309Z" level=info msg="connecting to shim a790685114358094971f0dcef2dd3b215771641efb1d0d549540454c5744fad1" address="unix:///run/containerd/s/e59d9161890aedaf6a87f27ccd1d278aeb6664479e75db735f39a3640ceb288f" namespace=k8s.io protocol=ttrpc version=3 Sep 4 05:20:26.740718 containerd[1593]: time="2025-09-04T05:20:26.740645368Z" level=info msg="connecting to shim d7db5dd4b485058b9bb0384aa87b15a2f105d19270fe2df7364811c74a31b2a2" address="unix:///run/containerd/s/1efeac4262fea58bc2523a64283b748908ce6e173834a141044a1ee8e0a507d8" namespace=k8s.io protocol=ttrpc version=3 Sep 4 05:20:26.744914 containerd[1593]: 
time="2025-09-04T05:20:26.744547148Z" level=info msg="connecting to shim e8e8f08b4aef307bf8240d37098cd446da5ff10a960f19725be6d6b0c8dc3770" address="unix:///run/containerd/s/c1e11a6f79d2f5803c4f7056a626dbdb6f12009282f053bffe693ba0c0754a91" namespace=k8s.io protocol=ttrpc version=3 Sep 4 05:20:26.800618 systemd[1]: Started cri-containerd-a790685114358094971f0dcef2dd3b215771641efb1d0d549540454c5744fad1.scope - libcontainer container a790685114358094971f0dcef2dd3b215771641efb1d0d549540454c5744fad1. Sep 4 05:20:26.807650 systemd[1]: Started cri-containerd-d7db5dd4b485058b9bb0384aa87b15a2f105d19270fe2df7364811c74a31b2a2.scope - libcontainer container d7db5dd4b485058b9bb0384aa87b15a2f105d19270fe2df7364811c74a31b2a2. Sep 4 05:20:26.827508 systemd[1]: Started cri-containerd-e8e8f08b4aef307bf8240d37098cd446da5ff10a960f19725be6d6b0c8dc3770.scope - libcontainer container e8e8f08b4aef307bf8240d37098cd446da5ff10a960f19725be6d6b0c8dc3770. Sep 4 05:20:26.868673 kubelet[2375]: E0904 05:20:26.868603 2375 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.60:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.60:6443: connect: connection refused" interval="3.2s" Sep 4 05:20:26.903403 containerd[1593]: time="2025-09-04T05:20:26.903328685Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:a9176403b596d0b29ae8ad12d635226d,Namespace:kube-system,Attempt:0,} returns sandbox id \"d7db5dd4b485058b9bb0384aa87b15a2f105d19270fe2df7364811c74a31b2a2\"" Sep 4 05:20:26.909686 containerd[1593]: time="2025-09-04T05:20:26.909651140Z" level=info msg="CreateContainer within sandbox \"d7db5dd4b485058b9bb0384aa87b15a2f105d19270fe2df7364811c74a31b2a2\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Sep 4 05:20:26.910021 containerd[1593]: time="2025-09-04T05:20:26.909946776Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:b4351e0f26db1ede4b566b3d7ffc6b26,Namespace:kube-system,Attempt:0,} returns sandbox id \"a790685114358094971f0dcef2dd3b215771641efb1d0d549540454c5744fad1\"" Sep 4 05:20:26.913643 containerd[1593]: time="2025-09-04T05:20:26.913600861Z" level=info msg="CreateContainer within sandbox \"a790685114358094971f0dcef2dd3b215771641efb1d0d549540454c5744fad1\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Sep 4 05:20:26.915132 containerd[1593]: time="2025-09-04T05:20:26.915074572Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:a88c9297c136b0f15880bf567e89a977,Namespace:kube-system,Attempt:0,} returns sandbox id \"e8e8f08b4aef307bf8240d37098cd446da5ff10a960f19725be6d6b0c8dc3770\"" Sep 4 05:20:26.917405 containerd[1593]: time="2025-09-04T05:20:26.917334389Z" level=info msg="CreateContainer within sandbox \"e8e8f08b4aef307bf8240d37098cd446da5ff10a960f19725be6d6b0c8dc3770\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Sep 4 05:20:26.933179 containerd[1593]: time="2025-09-04T05:20:26.933107657Z" level=info msg="Container b640f756c1d71f8574edefb31e721267b77ecfa4f9c7c351d7d80f49f7676a47: CDI devices from CRI Config.CDIDevices: []" Sep 4 05:20:26.933890 containerd[1593]: time="2025-09-04T05:20:26.933856430Z" level=info msg="Container cf8e45285938c9dbfe6770c518ab58cc24b3161ed82ddff7e3bdc6c466df025f: CDI devices from CRI Config.CDIDevices: []" Sep 4 05:20:26.936765 containerd[1593]: time="2025-09-04T05:20:26.936728960Z" level=info msg="Container 
be7ef9792e98925710809357f4b43c6960b9493e2fcbbdf9b0cce00de45be670: CDI devices from CRI Config.CDIDevices: []" Sep 4 05:20:26.946660 containerd[1593]: time="2025-09-04T05:20:26.946607363Z" level=info msg="CreateContainer within sandbox \"e8e8f08b4aef307bf8240d37098cd446da5ff10a960f19725be6d6b0c8dc3770\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"be7ef9792e98925710809357f4b43c6960b9493e2fcbbdf9b0cce00de45be670\"" Sep 4 05:20:26.947328 containerd[1593]: time="2025-09-04T05:20:26.947285721Z" level=info msg="StartContainer for \"be7ef9792e98925710809357f4b43c6960b9493e2fcbbdf9b0cce00de45be670\"" Sep 4 05:20:26.948162 containerd[1593]: time="2025-09-04T05:20:26.948126210Z" level=info msg="CreateContainer within sandbox \"a790685114358094971f0dcef2dd3b215771641efb1d0d549540454c5744fad1\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"b640f756c1d71f8574edefb31e721267b77ecfa4f9c7c351d7d80f49f7676a47\"" Sep 4 05:20:26.948704 containerd[1593]: time="2025-09-04T05:20:26.948470379Z" level=info msg="connecting to shim be7ef9792e98925710809357f4b43c6960b9493e2fcbbdf9b0cce00de45be670" address="unix:///run/containerd/s/c1e11a6f79d2f5803c4f7056a626dbdb6f12009282f053bffe693ba0c0754a91" protocol=ttrpc version=3 Sep 4 05:20:26.948704 containerd[1593]: time="2025-09-04T05:20:26.948518411Z" level=info msg="StartContainer for \"b640f756c1d71f8574edefb31e721267b77ecfa4f9c7c351d7d80f49f7676a47\"" Sep 4 05:20:26.949585 containerd[1593]: time="2025-09-04T05:20:26.949560196Z" level=info msg="CreateContainer within sandbox \"d7db5dd4b485058b9bb0384aa87b15a2f105d19270fe2df7364811c74a31b2a2\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"cf8e45285938c9dbfe6770c518ab58cc24b3161ed82ddff7e3bdc6c466df025f\"" Sep 4 05:20:26.949666 containerd[1593]: time="2025-09-04T05:20:26.949604079Z" level=info msg="connecting to shim b640f756c1d71f8574edefb31e721267b77ecfa4f9c7c351d7d80f49f7676a47" address="unix:///run/containerd/s/e59d9161890aedaf6a87f27ccd1d278aeb6664479e75db735f39a3640ceb288f" protocol=ttrpc version=3 Sep 4 05:20:26.950265 containerd[1593]: time="2025-09-04T05:20:26.950232112Z" level=info msg="StartContainer for \"cf8e45285938c9dbfe6770c518ab58cc24b3161ed82ddff7e3bdc6c466df025f\"" Sep 4 05:20:26.951190 containerd[1593]: time="2025-09-04T05:20:26.951156452Z" level=info msg="connecting to shim cf8e45285938c9dbfe6770c518ab58cc24b3161ed82ddff7e3bdc6c466df025f" address="unix:///run/containerd/s/1efeac4262fea58bc2523a64283b748908ce6e173834a141044a1ee8e0a507d8" protocol=ttrpc version=3 Sep 4 05:20:26.969579 systemd[1]: Started cri-containerd-b640f756c1d71f8574edefb31e721267b77ecfa4f9c7c351d7d80f49f7676a47.scope - libcontainer container b640f756c1d71f8574edefb31e721267b77ecfa4f9c7c351d7d80f49f7676a47. Sep 4 05:20:26.974242 systemd[1]: Started cri-containerd-be7ef9792e98925710809357f4b43c6960b9493e2fcbbdf9b0cce00de45be670.scope - libcontainer container be7ef9792e98925710809357f4b43c6960b9493e2fcbbdf9b0cce00de45be670. Sep 4 05:20:26.977433 systemd[1]: Started cri-containerd-cf8e45285938c9dbfe6770c518ab58cc24b3161ed82ddff7e3bdc6c466df025f.scope - libcontainer container cf8e45285938c9dbfe6770c518ab58cc24b3161ed82ddff7e3bdc6c466df025f. 
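
Editor's note: the sequence above is the kubelet driving containerd through the CRI for each static control-plane pod: RunPodSandbox returns a sandbox id, CreateContainer within that sandbox returns a container id, then StartContainer, with containerd spawning a shim per sandbox. A compact sketch of the same three calls made directly against containerd's CRI socket with the published k8s.io/cri-api v1 client; the socket path and the kube-scheduler image tag are assumptions, and the image is expected to already be present (the kubelet pulls it first).

package main

import (
	"context"
	"log"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	// Default containerd CRI endpoint; an assumption, not read from this log.
	conn, err := grpc.Dial("unix:///run/containerd/containerd.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()
	rt := runtimeapi.NewRuntimeServiceClient(conn)
	ctx := context.Background()

	// 1. RunPodSandbox -> sandbox id (cf. d7db5dd4... in the log above).
	sbCfg := &runtimeapi.PodSandboxConfig{
		Metadata: &runtimeapi.PodSandboxMetadata{
			Name: "kube-scheduler-localhost", Namespace: "kube-system",
			Uid: "a9176403b596d0b29ae8ad12d635226d", Attempt: 0,
		},
	}
	sb, err := rt.RunPodSandbox(ctx, &runtimeapi.RunPodSandboxRequest{Config: sbCfg})
	if err != nil {
		log.Fatal(err)
	}

	// 2. CreateContainer within that sandbox.
	cc, err := rt.CreateContainer(ctx, &runtimeapi.CreateContainerRequest{
		PodSandboxId: sb.PodSandboxId,
		Config: &runtimeapi.ContainerConfig{
			Metadata: &runtimeapi.ContainerMetadata{Name: "kube-scheduler", Attempt: 0},
			Image:    &runtimeapi.ImageSpec{Image: "registry.k8s.io/kube-scheduler:v1.32.4"}, // assumed tag
		},
		SandboxConfig: sbCfg,
	})
	if err != nil {
		log.Fatal(err)
	}

	// 3. StartContainer, which is what "StartContainer ... returns successfully" reports.
	if _, err := rt.StartContainer(ctx, &runtimeapi.StartContainerRequest{ContainerId: cc.ContainerId}); err != nil {
		log.Fatal(err)
	}
	log.Printf("sandbox=%s container=%s started", sb.PodSandboxId, cc.ContainerId)
}
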
Sep 4 05:20:27.045854 containerd[1593]: time="2025-09-04T05:20:27.045692725Z" level=info msg="StartContainer for \"be7ef9792e98925710809357f4b43c6960b9493e2fcbbdf9b0cce00de45be670\" returns successfully" Sep 4 05:20:27.047970 containerd[1593]: time="2025-09-04T05:20:27.047902251Z" level=info msg="StartContainer for \"cf8e45285938c9dbfe6770c518ab58cc24b3161ed82ddff7e3bdc6c466df025f\" returns successfully" Sep 4 05:20:27.048415 containerd[1593]: time="2025-09-04T05:20:27.048371458Z" level=info msg="StartContainer for \"b640f756c1d71f8574edefb31e721267b77ecfa4f9c7c351d7d80f49f7676a47\" returns successfully" Sep 4 05:20:27.367622 kubelet[2375]: I0904 05:20:27.367233 2375 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Sep 4 05:20:27.645062 kubelet[2375]: E0904 05:20:27.645012 2375 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 4 05:20:27.646767 kubelet[2375]: E0904 05:20:27.646741 2375 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 4 05:20:27.648596 kubelet[2375]: E0904 05:20:27.648557 2375 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 4 05:20:28.357035 kubelet[2375]: I0904 05:20:28.356583 2375 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Sep 4 05:20:28.365799 kubelet[2375]: I0904 05:20:28.365720 2375 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Sep 4 05:20:28.374290 kubelet[2375]: E0904 05:20:28.374198 2375 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost" Sep 4 05:20:28.374290 kubelet[2375]: I0904 05:20:28.374240 2375 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Sep 4 05:20:28.376173 kubelet[2375]: E0904 05:20:28.376126 2375 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-localhost" Sep 4 05:20:28.376173 kubelet[2375]: I0904 05:20:28.376151 2375 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Sep 4 05:20:28.377827 kubelet[2375]: E0904 05:20:28.377803 2375 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost" Sep 4 05:20:28.650040 kubelet[2375]: I0904 05:20:28.649988 2375 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Sep 4 05:20:28.650699 kubelet[2375]: I0904 05:20:28.650116 2375 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Sep 4 05:20:28.652433 kubelet[2375]: E0904 05:20:28.652388 2375 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost" Sep 4 05:20:28.652433 kubelet[2375]: E0904 05:20:28.652421 2375 kubelet.go:3196] "Failed creating a mirror pod" 
err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost" Sep 4 05:20:28.833998 kubelet[2375]: I0904 05:20:28.833955 2375 apiserver.go:52] "Watching apiserver" Sep 4 05:20:28.865967 kubelet[2375]: I0904 05:20:28.865934 2375 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Sep 4 05:20:30.399035 systemd[1]: Reload requested from client PID 2648 ('systemctl') (unit session-7.scope)... Sep 4 05:20:30.399053 systemd[1]: Reloading... Sep 4 05:20:30.491416 zram_generator::config[2691]: No configuration found. Sep 4 05:20:30.735543 systemd[1]: Reloading finished in 336 ms. Sep 4 05:20:30.762826 kubelet[2375]: I0904 05:20:30.762776 2375 dynamic_cafile_content.go:175] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Sep 4 05:20:30.762849 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Sep 4 05:20:30.785712 systemd[1]: kubelet.service: Deactivated successfully. Sep 4 05:20:30.786068 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Sep 4 05:20:30.786120 systemd[1]: kubelet.service: Consumed 1.119s CPU time, 134.6M memory peak. Sep 4 05:20:30.788092 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 4 05:20:31.019435 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 4 05:20:31.035736 (kubelet)[2736]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Sep 4 05:20:31.085091 kubelet[2736]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 4 05:20:31.085091 kubelet[2736]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Sep 4 05:20:31.085091 kubelet[2736]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 4 05:20:31.085628 kubelet[2736]: I0904 05:20:31.085146 2736 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Sep 4 05:20:31.093565 kubelet[2736]: I0904 05:20:31.093523 2736 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Sep 4 05:20:31.093565 kubelet[2736]: I0904 05:20:31.093548 2736 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Sep 4 05:20:31.093848 kubelet[2736]: I0904 05:20:31.093816 2736 server.go:954] "Client rotation is on, will bootstrap in background" Sep 4 05:20:31.094998 kubelet[2736]: I0904 05:20:31.094971 2736 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". 
Sep 4 05:20:31.097172 kubelet[2736]: I0904 05:20:31.097153 2736 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Sep 4 05:20:31.101860 kubelet[2736]: I0904 05:20:31.101831 2736 server.go:1444] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Sep 4 05:20:31.107462 kubelet[2736]: I0904 05:20:31.107412 2736 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Sep 4 05:20:31.107709 kubelet[2736]: I0904 05:20:31.107653 2736 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Sep 4 05:20:31.107862 kubelet[2736]: I0904 05:20:31.107696 2736 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Sep 4 05:20:31.107965 kubelet[2736]: I0904 05:20:31.107872 2736 topology_manager.go:138] "Creating topology manager with none policy" Sep 4 05:20:31.107965 kubelet[2736]: I0904 05:20:31.107885 2736 container_manager_linux.go:304] "Creating device plugin manager" Sep 4 05:20:31.108033 kubelet[2736]: I0904 05:20:31.107972 2736 state_mem.go:36] "Initialized new in-memory state store" Sep 4 05:20:31.108154 kubelet[2736]: I0904 05:20:31.108135 2736 kubelet.go:446] "Attempting to sync node with API server" Sep 4 05:20:31.108203 kubelet[2736]: I0904 05:20:31.108164 2736 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Sep 4 05:20:31.108203 kubelet[2736]: I0904 05:20:31.108200 2736 kubelet.go:352] "Adding apiserver pod source" Sep 4 05:20:31.108264 kubelet[2736]: I0904 05:20:31.108215 2736 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Sep 4 05:20:31.109117 kubelet[2736]: I0904 05:20:31.109063 2736 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v2.0.5" apiVersion="v1" Sep 4 05:20:31.109591 kubelet[2736]: I0904 05:20:31.109566 2736 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in 
static kubelet mode" Sep 4 05:20:31.110097 kubelet[2736]: I0904 05:20:31.110071 2736 watchdog_linux.go:99] "Systemd watchdog is not enabled" Sep 4 05:20:31.110157 kubelet[2736]: I0904 05:20:31.110110 2736 server.go:1287] "Started kubelet" Sep 4 05:20:31.112414 kubelet[2736]: I0904 05:20:31.110398 2736 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Sep 4 05:20:31.112414 kubelet[2736]: I0904 05:20:31.111440 2736 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Sep 4 05:20:31.112414 kubelet[2736]: I0904 05:20:31.111581 2736 server.go:479] "Adding debug handlers to kubelet server" Sep 4 05:20:31.112414 kubelet[2736]: I0904 05:20:31.112087 2736 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Sep 4 05:20:31.117098 kubelet[2736]: I0904 05:20:31.116901 2736 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Sep 4 05:20:31.118399 kubelet[2736]: I0904 05:20:31.118013 2736 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Sep 4 05:20:31.119128 kubelet[2736]: I0904 05:20:31.119100 2736 volume_manager.go:297] "Starting Kubelet Volume Manager" Sep 4 05:20:31.119396 kubelet[2736]: I0904 05:20:31.119320 2736 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Sep 4 05:20:31.119547 kubelet[2736]: I0904 05:20:31.119493 2736 reconciler.go:26] "Reconciler: start to sync state" Sep 4 05:20:31.126423 kubelet[2736]: E0904 05:20:31.124322 2736 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 4 05:20:31.126423 kubelet[2736]: I0904 05:20:31.125873 2736 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Sep 4 05:20:31.132749 kubelet[2736]: I0904 05:20:31.132501 2736 factory.go:221] Registration of the containerd container factory successfully Sep 4 05:20:31.132749 kubelet[2736]: I0904 05:20:31.132528 2736 factory.go:221] Registration of the systemd container factory successfully Sep 4 05:20:31.135028 kubelet[2736]: E0904 05:20:31.134988 2736 kubelet.go:1555] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Sep 4 05:20:31.140792 kubelet[2736]: I0904 05:20:31.140753 2736 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Sep 4 05:20:31.143912 kubelet[2736]: I0904 05:20:31.143839 2736 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Sep 4 05:20:31.143912 kubelet[2736]: I0904 05:20:31.143866 2736 status_manager.go:227] "Starting to sync pod status with apiserver" Sep 4 05:20:31.143912 kubelet[2736]: I0904 05:20:31.143888 2736 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Sep 4 05:20:31.143912 kubelet[2736]: I0904 05:20:31.143894 2736 kubelet.go:2382] "Starting kubelet main sync loop" Sep 4 05:20:31.144156 kubelet[2736]: E0904 05:20:31.143944 2736 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Sep 4 05:20:31.238687 kubelet[2736]: I0904 05:20:31.238626 2736 cpu_manager.go:221] "Starting CPU manager" policy="none" Sep 4 05:20:31.238687 kubelet[2736]: I0904 05:20:31.238653 2736 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Sep 4 05:20:31.238687 kubelet[2736]: I0904 05:20:31.238694 2736 state_mem.go:36] "Initialized new in-memory state store" Sep 4 05:20:31.238918 kubelet[2736]: I0904 05:20:31.238895 2736 state_mem.go:88] "Updated default CPUSet" cpuSet="" Sep 4 05:20:31.238918 kubelet[2736]: I0904 05:20:31.238906 2736 state_mem.go:96] "Updated CPUSet assignments" assignments={} Sep 4 05:20:31.239012 kubelet[2736]: I0904 05:20:31.238926 2736 policy_none.go:49] "None policy: Start" Sep 4 05:20:31.239012 kubelet[2736]: I0904 05:20:31.238938 2736 memory_manager.go:186] "Starting memorymanager" policy="None" Sep 4 05:20:31.239012 kubelet[2736]: I0904 05:20:31.238948 2736 state_mem.go:35] "Initializing new in-memory state store" Sep 4 05:20:31.239097 kubelet[2736]: I0904 05:20:31.239043 2736 state_mem.go:75] "Updated machine memory state" Sep 4 05:20:31.244245 kubelet[2736]: E0904 05:20:31.244177 2736 kubelet.go:2406] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Sep 4 05:20:31.244346 kubelet[2736]: I0904 05:20:31.244318 2736 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Sep 4 05:20:31.244534 kubelet[2736]: I0904 05:20:31.244513 2736 eviction_manager.go:189] "Eviction manager: starting control loop" Sep 4 05:20:31.244564 kubelet[2736]: I0904 05:20:31.244528 2736 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Sep 4 05:20:31.245183 kubelet[2736]: I0904 05:20:31.245085 2736 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Sep 4 05:20:31.248684 kubelet[2736]: E0904 05:20:31.248640 2736 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Sep 4 05:20:31.352872 kubelet[2736]: I0904 05:20:31.352744 2736 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Sep 4 05:20:31.445704 kubelet[2736]: I0904 05:20:31.445654 2736 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Sep 4 05:20:31.446155 kubelet[2736]: I0904 05:20:31.446133 2736 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Sep 4 05:20:31.446200 kubelet[2736]: I0904 05:20:31.446166 2736 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Sep 4 05:20:31.508262 sudo[2773]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Sep 4 05:20:31.508697 sudo[2773]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0) Sep 4 05:20:31.524391 kubelet[2736]: I0904 05:20:31.524332 2736 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/b4351e0f26db1ede4b566b3d7ffc6b26-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"b4351e0f26db1ede4b566b3d7ffc6b26\") " pod="kube-system/kube-apiserver-localhost" Sep 4 05:20:31.524391 kubelet[2736]: I0904 05:20:31.524390 2736 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/a88c9297c136b0f15880bf567e89a977-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"a88c9297c136b0f15880bf567e89a977\") " pod="kube-system/kube-controller-manager-localhost" Sep 4 05:20:31.524588 kubelet[2736]: I0904 05:20:31.524421 2736 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/a88c9297c136b0f15880bf567e89a977-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"a88c9297c136b0f15880bf567e89a977\") " pod="kube-system/kube-controller-manager-localhost" Sep 4 05:20:31.524588 kubelet[2736]: I0904 05:20:31.524454 2736 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/a88c9297c136b0f15880bf567e89a977-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"a88c9297c136b0f15880bf567e89a977\") " pod="kube-system/kube-controller-manager-localhost" Sep 4 05:20:31.524588 kubelet[2736]: I0904 05:20:31.524476 2736 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/a88c9297c136b0f15880bf567e89a977-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"a88c9297c136b0f15880bf567e89a977\") " pod="kube-system/kube-controller-manager-localhost" Sep 4 05:20:31.524588 kubelet[2736]: I0904 05:20:31.524496 2736 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/a9176403b596d0b29ae8ad12d635226d-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"a9176403b596d0b29ae8ad12d635226d\") " pod="kube-system/kube-scheduler-localhost" Sep 4 05:20:31.524588 kubelet[2736]: I0904 05:20:31.524530 2736 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: 
\"kubernetes.io/host-path/b4351e0f26db1ede4b566b3d7ffc6b26-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"b4351e0f26db1ede4b566b3d7ffc6b26\") " pod="kube-system/kube-apiserver-localhost" Sep 4 05:20:31.524761 kubelet[2736]: I0904 05:20:31.524568 2736 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/b4351e0f26db1ede4b566b3d7ffc6b26-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"b4351e0f26db1ede4b566b3d7ffc6b26\") " pod="kube-system/kube-apiserver-localhost" Sep 4 05:20:31.524761 kubelet[2736]: I0904 05:20:31.524593 2736 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/a88c9297c136b0f15880bf567e89a977-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"a88c9297c136b0f15880bf567e89a977\") " pod="kube-system/kube-controller-manager-localhost" Sep 4 05:20:31.609291 kubelet[2736]: I0904 05:20:31.609125 2736 kubelet_node_status.go:124] "Node was previously registered" node="localhost" Sep 4 05:20:31.609291 kubelet[2736]: I0904 05:20:31.609250 2736 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Sep 4 05:20:32.101069 sudo[2773]: pam_unix(sudo:session): session closed for user root Sep 4 05:20:32.109831 kubelet[2736]: I0904 05:20:32.109779 2736 apiserver.go:52] "Watching apiserver" Sep 4 05:20:32.120342 kubelet[2736]: I0904 05:20:32.120302 2736 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Sep 4 05:20:32.217898 kubelet[2736]: I0904 05:20:32.217858 2736 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Sep 4 05:20:32.451790 kubelet[2736]: E0904 05:20:32.451306 2736 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Sep 4 05:20:32.471485 kubelet[2736]: I0904 05:20:32.471408 2736 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=1.471340515 podStartE2EDuration="1.471340515s" podCreationTimestamp="2025-09-04 05:20:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-04 05:20:32.470195977 +0000 UTC m=+1.430517582" watchObservedRunningTime="2025-09-04 05:20:32.471340515 +0000 UTC m=+1.431662120" Sep 4 05:20:32.488941 kubelet[2736]: I0904 05:20:32.488830 2736 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.488802738 podStartE2EDuration="1.488802738s" podCreationTimestamp="2025-09-04 05:20:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-04 05:20:32.47888224 +0000 UTC m=+1.439203865" watchObservedRunningTime="2025-09-04 05:20:32.488802738 +0000 UTC m=+1.449124343" Sep 4 05:20:32.498779 kubelet[2736]: I0904 05:20:32.498716 2736 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.498702007 podStartE2EDuration="1.498702007s" podCreationTimestamp="2025-09-04 05:20:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-04 05:20:32.489127046 +0000 UTC 
m=+1.449448651" watchObservedRunningTime="2025-09-04 05:20:32.498702007 +0000 UTC m=+1.459023602" Sep 4 05:20:33.762730 sudo[1801]: pam_unix(sudo:session): session closed for user root Sep 4 05:20:33.764237 sshd[1800]: Connection closed by 10.0.0.1 port 52018 Sep 4 05:20:33.764986 sshd-session[1797]: pam_unix(sshd:session): session closed for user core Sep 4 05:20:33.768718 systemd[1]: sshd@6-10.0.0.60:22-10.0.0.1:52018.service: Deactivated successfully. Sep 4 05:20:33.771169 systemd[1]: session-7.scope: Deactivated successfully. Sep 4 05:20:33.771448 systemd[1]: session-7.scope: Consumed 5.160s CPU time, 263.8M memory peak. Sep 4 05:20:33.773353 systemd-logind[1582]: Session 7 logged out. Waiting for processes to exit. Sep 4 05:20:33.774771 systemd-logind[1582]: Removed session 7. Sep 4 05:20:34.609250 update_engine[1586]: I20250904 05:20:34.609138 1586 update_attempter.cc:509] Updating boot flags... Sep 4 05:20:34.975918 kubelet[2736]: I0904 05:20:34.975835 2736 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Sep 4 05:20:34.976554 containerd[1593]: time="2025-09-04T05:20:34.976487379Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Sep 4 05:20:34.976937 kubelet[2736]: I0904 05:20:34.976900 2736 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Sep 4 05:20:35.921671 systemd[1]: Created slice kubepods-besteffort-poddbfea143_ab61_417b_8a27_654e09148edd.slice - libcontainer container kubepods-besteffort-poddbfea143_ab61_417b_8a27_654e09148edd.slice. Sep 4 05:20:35.941614 systemd[1]: Created slice kubepods-burstable-pod82744d0c_dd68_43e3_9f7d_fa17285199ee.slice - libcontainer container kubepods-burstable-pod82744d0c_dd68_43e3_9f7d_fa17285199ee.slice. 
Sep 4 05:20:36.020791 kubelet[2736]: I0904 05:20:36.020742 2736 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/82744d0c-dd68-43e3-9f7d-fa17285199ee-etc-cni-netd\") pod \"cilium-b2zfq\" (UID: \"82744d0c-dd68-43e3-9f7d-fa17285199ee\") " pod="kube-system/cilium-b2zfq" Sep 4 05:20:36.020791 kubelet[2736]: I0904 05:20:36.020776 2736 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b8rdc\" (UniqueName: \"kubernetes.io/projected/82744d0c-dd68-43e3-9f7d-fa17285199ee-kube-api-access-b8rdc\") pod \"cilium-b2zfq\" (UID: \"82744d0c-dd68-43e3-9f7d-fa17285199ee\") " pod="kube-system/cilium-b2zfq" Sep 4 05:20:36.021299 kubelet[2736]: I0904 05:20:36.020807 2736 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/82744d0c-dd68-43e3-9f7d-fa17285199ee-clustermesh-secrets\") pod \"cilium-b2zfq\" (UID: \"82744d0c-dd68-43e3-9f7d-fa17285199ee\") " pod="kube-system/cilium-b2zfq" Sep 4 05:20:36.021299 kubelet[2736]: I0904 05:20:36.020823 2736 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/dbfea143-ab61-417b-8a27-654e09148edd-kube-proxy\") pod \"kube-proxy-z5q9l\" (UID: \"dbfea143-ab61-417b-8a27-654e09148edd\") " pod="kube-system/kube-proxy-z5q9l" Sep 4 05:20:36.021299 kubelet[2736]: I0904 05:20:36.020837 2736 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f2ckh\" (UniqueName: \"kubernetes.io/projected/dbfea143-ab61-417b-8a27-654e09148edd-kube-api-access-f2ckh\") pod \"kube-proxy-z5q9l\" (UID: \"dbfea143-ab61-417b-8a27-654e09148edd\") " pod="kube-system/kube-proxy-z5q9l" Sep 4 05:20:36.021299 kubelet[2736]: I0904 05:20:36.020851 2736 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/82744d0c-dd68-43e3-9f7d-fa17285199ee-hostproc\") pod \"cilium-b2zfq\" (UID: \"82744d0c-dd68-43e3-9f7d-fa17285199ee\") " pod="kube-system/cilium-b2zfq" Sep 4 05:20:36.021299 kubelet[2736]: I0904 05:20:36.020865 2736 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/82744d0c-dd68-43e3-9f7d-fa17285199ee-cilium-cgroup\") pod \"cilium-b2zfq\" (UID: \"82744d0c-dd68-43e3-9f7d-fa17285199ee\") " pod="kube-system/cilium-b2zfq" Sep 4 05:20:36.021299 kubelet[2736]: I0904 05:20:36.020878 2736 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/82744d0c-dd68-43e3-9f7d-fa17285199ee-xtables-lock\") pod \"cilium-b2zfq\" (UID: \"82744d0c-dd68-43e3-9f7d-fa17285199ee\") " pod="kube-system/cilium-b2zfq" Sep 4 05:20:36.021568 kubelet[2736]: I0904 05:20:36.020892 2736 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/82744d0c-dd68-43e3-9f7d-fa17285199ee-bpf-maps\") pod \"cilium-b2zfq\" (UID: \"82744d0c-dd68-43e3-9f7d-fa17285199ee\") " pod="kube-system/cilium-b2zfq" Sep 4 05:20:36.021568 kubelet[2736]: I0904 05:20:36.020906 2736 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" 
(UniqueName: \"kubernetes.io/host-path/82744d0c-dd68-43e3-9f7d-fa17285199ee-cni-path\") pod \"cilium-b2zfq\" (UID: \"82744d0c-dd68-43e3-9f7d-fa17285199ee\") " pod="kube-system/cilium-b2zfq" Sep 4 05:20:36.021568 kubelet[2736]: I0904 05:20:36.020919 2736 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/82744d0c-dd68-43e3-9f7d-fa17285199ee-host-proc-sys-net\") pod \"cilium-b2zfq\" (UID: \"82744d0c-dd68-43e3-9f7d-fa17285199ee\") " pod="kube-system/cilium-b2zfq" Sep 4 05:20:36.021568 kubelet[2736]: I0904 05:20:36.020932 2736 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/82744d0c-dd68-43e3-9f7d-fa17285199ee-lib-modules\") pod \"cilium-b2zfq\" (UID: \"82744d0c-dd68-43e3-9f7d-fa17285199ee\") " pod="kube-system/cilium-b2zfq" Sep 4 05:20:36.021568 kubelet[2736]: I0904 05:20:36.020944 2736 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/82744d0c-dd68-43e3-9f7d-fa17285199ee-cilium-config-path\") pod \"cilium-b2zfq\" (UID: \"82744d0c-dd68-43e3-9f7d-fa17285199ee\") " pod="kube-system/cilium-b2zfq" Sep 4 05:20:36.021568 kubelet[2736]: I0904 05:20:36.020958 2736 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/82744d0c-dd68-43e3-9f7d-fa17285199ee-hubble-tls\") pod \"cilium-b2zfq\" (UID: \"82744d0c-dd68-43e3-9f7d-fa17285199ee\") " pod="kube-system/cilium-b2zfq" Sep 4 05:20:36.021774 kubelet[2736]: I0904 05:20:36.020983 2736 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/82744d0c-dd68-43e3-9f7d-fa17285199ee-host-proc-sys-kernel\") pod \"cilium-b2zfq\" (UID: \"82744d0c-dd68-43e3-9f7d-fa17285199ee\") " pod="kube-system/cilium-b2zfq" Sep 4 05:20:36.021774 kubelet[2736]: I0904 05:20:36.021001 2736 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/dbfea143-ab61-417b-8a27-654e09148edd-xtables-lock\") pod \"kube-proxy-z5q9l\" (UID: \"dbfea143-ab61-417b-8a27-654e09148edd\") " pod="kube-system/kube-proxy-z5q9l" Sep 4 05:20:36.021774 kubelet[2736]: I0904 05:20:36.021022 2736 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/dbfea143-ab61-417b-8a27-654e09148edd-lib-modules\") pod \"kube-proxy-z5q9l\" (UID: \"dbfea143-ab61-417b-8a27-654e09148edd\") " pod="kube-system/kube-proxy-z5q9l" Sep 4 05:20:36.021774 kubelet[2736]: I0904 05:20:36.021038 2736 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/82744d0c-dd68-43e3-9f7d-fa17285199ee-cilium-run\") pod \"cilium-b2zfq\" (UID: \"82744d0c-dd68-43e3-9f7d-fa17285199ee\") " pod="kube-system/cilium-b2zfq" Sep 4 05:20:36.022015 systemd[1]: Created slice kubepods-besteffort-pode73116b9_cf2a_4a5a_8640_1a0f1b531df6.slice - libcontainer container kubepods-besteffort-pode73116b9_cf2a_4a5a_8640_1a0f1b531df6.slice. 
Sep 4 05:20:36.121458 kubelet[2736]: I0904 05:20:36.121402 2736 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sp9zq\" (UniqueName: \"kubernetes.io/projected/e73116b9-cf2a-4a5a-8640-1a0f1b531df6-kube-api-access-sp9zq\") pod \"cilium-operator-6c4d7847fc-4cnkh\" (UID: \"e73116b9-cf2a-4a5a-8640-1a0f1b531df6\") " pod="kube-system/cilium-operator-6c4d7847fc-4cnkh" Sep 4 05:20:36.122807 kubelet[2736]: I0904 05:20:36.122749 2736 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/e73116b9-cf2a-4a5a-8640-1a0f1b531df6-cilium-config-path\") pod \"cilium-operator-6c4d7847fc-4cnkh\" (UID: \"e73116b9-cf2a-4a5a-8640-1a0f1b531df6\") " pod="kube-system/cilium-operator-6c4d7847fc-4cnkh" Sep 4 05:20:36.238355 containerd[1593]: time="2025-09-04T05:20:36.238220131Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-z5q9l,Uid:dbfea143-ab61-417b-8a27-654e09148edd,Namespace:kube-system,Attempt:0,}" Sep 4 05:20:36.245197 containerd[1593]: time="2025-09-04T05:20:36.245159558Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-b2zfq,Uid:82744d0c-dd68-43e3-9f7d-fa17285199ee,Namespace:kube-system,Attempt:0,}" Sep 4 05:20:36.327672 containerd[1593]: time="2025-09-04T05:20:36.327598116Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-4cnkh,Uid:e73116b9-cf2a-4a5a-8640-1a0f1b531df6,Namespace:kube-system,Attempt:0,}" Sep 4 05:20:36.463101 containerd[1593]: time="2025-09-04T05:20:36.462957585Z" level=info msg="connecting to shim fb811798c3310e0bd505a159f049ed7567f888f29809c8c4156cccccf5f32f5a" address="unix:///run/containerd/s/4d59ee138d1c2590759a91ab66466bd80f3ec188358fc0e08566af294580e28d" namespace=k8s.io protocol=ttrpc version=3 Sep 4 05:20:36.471633 containerd[1593]: time="2025-09-04T05:20:36.471553173Z" level=info msg="connecting to shim 0eb708b84c252c9d27fca3eb53bf2371fc42e66204adc4adcd980e718aa12a13" address="unix:///run/containerd/s/daa5a159d6f0a3d75aac3393dee337596272e1f0846f8bebb69c5e0aa7966440" namespace=k8s.io protocol=ttrpc version=3 Sep 4 05:20:36.480402 containerd[1593]: time="2025-09-04T05:20:36.480056746Z" level=info msg="connecting to shim 3a7630ff8d956f367acfb73b09d38c86d63c032ec0aecd40d46bcf96160f7af7" address="unix:///run/containerd/s/cf03dd3dc826ff96587426b9db543d9dd87cb6738428f7df2bc71d023edd31e0" namespace=k8s.io protocol=ttrpc version=3 Sep 4 05:20:36.501553 systemd[1]: Started cri-containerd-0eb708b84c252c9d27fca3eb53bf2371fc42e66204adc4adcd980e718aa12a13.scope - libcontainer container 0eb708b84c252c9d27fca3eb53bf2371fc42e66204adc4adcd980e718aa12a13. Sep 4 05:20:36.509889 systemd[1]: Started cri-containerd-3a7630ff8d956f367acfb73b09d38c86d63c032ec0aecd40d46bcf96160f7af7.scope - libcontainer container 3a7630ff8d956f367acfb73b09d38c86d63c032ec0aecd40d46bcf96160f7af7. Sep 4 05:20:36.513078 systemd[1]: Started cri-containerd-fb811798c3310e0bd505a159f049ed7567f888f29809c8c4156cccccf5f32f5a.scope - libcontainer container fb811798c3310e0bd505a159f049ed7567f888f29809c8c4156cccccf5f32f5a. 
Sep 4 05:20:36.551523 containerd[1593]: time="2025-09-04T05:20:36.551467615Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-b2zfq,Uid:82744d0c-dd68-43e3-9f7d-fa17285199ee,Namespace:kube-system,Attempt:0,} returns sandbox id \"0eb708b84c252c9d27fca3eb53bf2371fc42e66204adc4adcd980e718aa12a13\"" Sep 4 05:20:36.556399 containerd[1593]: time="2025-09-04T05:20:36.555549565Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Sep 4 05:20:36.563500 containerd[1593]: time="2025-09-04T05:20:36.563433764Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-z5q9l,Uid:dbfea143-ab61-417b-8a27-654e09148edd,Namespace:kube-system,Attempt:0,} returns sandbox id \"fb811798c3310e0bd505a159f049ed7567f888f29809c8c4156cccccf5f32f5a\"" Sep 4 05:20:36.567280 containerd[1593]: time="2025-09-04T05:20:36.567244719Z" level=info msg="CreateContainer within sandbox \"fb811798c3310e0bd505a159f049ed7567f888f29809c8c4156cccccf5f32f5a\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Sep 4 05:20:36.581541 containerd[1593]: time="2025-09-04T05:20:36.581484800Z" level=info msg="Container a248ba00afa979a3c1f09b0904f71c823109933feaf9e6baed20865c39a9e2e0: CDI devices from CRI Config.CDIDevices: []" Sep 4 05:20:36.582753 containerd[1593]: time="2025-09-04T05:20:36.582706315Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-4cnkh,Uid:e73116b9-cf2a-4a5a-8640-1a0f1b531df6,Namespace:kube-system,Attempt:0,} returns sandbox id \"3a7630ff8d956f367acfb73b09d38c86d63c032ec0aecd40d46bcf96160f7af7\"" Sep 4 05:20:36.590192 containerd[1593]: time="2025-09-04T05:20:36.590144739Z" level=info msg="CreateContainer within sandbox \"fb811798c3310e0bd505a159f049ed7567f888f29809c8c4156cccccf5f32f5a\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"a248ba00afa979a3c1f09b0904f71c823109933feaf9e6baed20865c39a9e2e0\"" Sep 4 05:20:36.593435 containerd[1593]: time="2025-09-04T05:20:36.593353162Z" level=info msg="StartContainer for \"a248ba00afa979a3c1f09b0904f71c823109933feaf9e6baed20865c39a9e2e0\"" Sep 4 05:20:36.595580 containerd[1593]: time="2025-09-04T05:20:36.595537184Z" level=info msg="connecting to shim a248ba00afa979a3c1f09b0904f71c823109933feaf9e6baed20865c39a9e2e0" address="unix:///run/containerd/s/4d59ee138d1c2590759a91ab66466bd80f3ec188358fc0e08566af294580e28d" protocol=ttrpc version=3 Sep 4 05:20:36.622542 systemd[1]: Started cri-containerd-a248ba00afa979a3c1f09b0904f71c823109933feaf9e6baed20865c39a9e2e0.scope - libcontainer container a248ba00afa979a3c1f09b0904f71c823109933feaf9e6baed20865c39a9e2e0. 
Sep 4 05:20:36.668818 containerd[1593]: time="2025-09-04T05:20:36.668753756Z" level=info msg="StartContainer for \"a248ba00afa979a3c1f09b0904f71c823109933feaf9e6baed20865c39a9e2e0\" returns successfully" Sep 4 05:20:37.294648 kubelet[2736]: I0904 05:20:37.294573 2736 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-z5q9l" podStartSLOduration=2.294532945 podStartE2EDuration="2.294532945s" podCreationTimestamp="2025-09-04 05:20:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-04 05:20:37.294360017 +0000 UTC m=+6.254681643" watchObservedRunningTime="2025-09-04 05:20:37.294532945 +0000 UTC m=+6.254854550" Sep 4 05:20:45.701388 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2352453489.mount: Deactivated successfully. Sep 4 05:20:51.424726 containerd[1593]: time="2025-09-04T05:20:51.424607888Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 05:20:51.425706 containerd[1593]: time="2025-09-04T05:20:51.425671521Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=166730503" Sep 4 05:20:51.427226 containerd[1593]: time="2025-09-04T05:20:51.427139405Z" level=info msg="ImageCreate event name:\"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 05:20:51.428833 containerd[1593]: time="2025-09-04T05:20:51.428783141Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"166719855\" in 14.873188942s" Sep 4 05:20:51.428833 containerd[1593]: time="2025-09-04T05:20:51.428833346Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\"" Sep 4 05:20:51.430361 containerd[1593]: time="2025-09-04T05:20:51.430037504Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Sep 4 05:20:51.431520 containerd[1593]: time="2025-09-04T05:20:51.431468028Z" level=info msg="CreateContainer within sandbox \"0eb708b84c252c9d27fca3eb53bf2371fc42e66204adc4adcd980e718aa12a13\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Sep 4 05:20:51.441241 containerd[1593]: time="2025-09-04T05:20:51.441185971Z" level=info msg="Container ff1d6c546e52f959f9a618bfe72acb4b4e9434e1a1e3704c85823588cb11a55f: CDI devices from CRI Config.CDIDevices: []" Sep 4 05:20:51.445568 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4156703620.mount: Deactivated successfully. 
Sep 4 05:20:51.452462 containerd[1593]: time="2025-09-04T05:20:51.452407637Z" level=info msg="CreateContainer within sandbox \"0eb708b84c252c9d27fca3eb53bf2371fc42e66204adc4adcd980e718aa12a13\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"ff1d6c546e52f959f9a618bfe72acb4b4e9434e1a1e3704c85823588cb11a55f\"" Sep 4 05:20:51.453166 containerd[1593]: time="2025-09-04T05:20:51.453122172Z" level=info msg="StartContainer for \"ff1d6c546e52f959f9a618bfe72acb4b4e9434e1a1e3704c85823588cb11a55f\"" Sep 4 05:20:51.455391 containerd[1593]: time="2025-09-04T05:20:51.455350209Z" level=info msg="connecting to shim ff1d6c546e52f959f9a618bfe72acb4b4e9434e1a1e3704c85823588cb11a55f" address="unix:///run/containerd/s/daa5a159d6f0a3d75aac3393dee337596272e1f0846f8bebb69c5e0aa7966440" protocol=ttrpc version=3 Sep 4 05:20:51.526552 systemd[1]: Started cri-containerd-ff1d6c546e52f959f9a618bfe72acb4b4e9434e1a1e3704c85823588cb11a55f.scope - libcontainer container ff1d6c546e52f959f9a618bfe72acb4b4e9434e1a1e3704c85823588cb11a55f. Sep 4 05:20:51.564894 containerd[1593]: time="2025-09-04T05:20:51.564841399Z" level=info msg="StartContainer for \"ff1d6c546e52f959f9a618bfe72acb4b4e9434e1a1e3704c85823588cb11a55f\" returns successfully" Sep 4 05:20:51.577954 systemd[1]: cri-containerd-ff1d6c546e52f959f9a618bfe72acb4b4e9434e1a1e3704c85823588cb11a55f.scope: Deactivated successfully. Sep 4 05:20:51.580903 containerd[1593]: time="2025-09-04T05:20:51.580855070Z" level=info msg="TaskExit event in podsandbox handler container_id:\"ff1d6c546e52f959f9a618bfe72acb4b4e9434e1a1e3704c85823588cb11a55f\" id:\"ff1d6c546e52f959f9a618bfe72acb4b4e9434e1a1e3704c85823588cb11a55f\" pid:3178 exited_at:{seconds:1756963251 nanos:580270359}" Sep 4 05:20:51.581024 containerd[1593]: time="2025-09-04T05:20:51.580967582Z" level=info msg="received exit event container_id:\"ff1d6c546e52f959f9a618bfe72acb4b4e9434e1a1e3704c85823588cb11a55f\" id:\"ff1d6c546e52f959f9a618bfe72acb4b4e9434e1a1e3704c85823588cb11a55f\" pid:3178 exited_at:{seconds:1756963251 nanos:580270359}" Sep 4 05:20:51.606767 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ff1d6c546e52f959f9a618bfe72acb4b4e9434e1a1e3704c85823588cb11a55f-rootfs.mount: Deactivated successfully. 
Sep 4 05:20:53.278871 containerd[1593]: time="2025-09-04T05:20:53.278813922Z" level=info msg="CreateContainer within sandbox \"0eb708b84c252c9d27fca3eb53bf2371fc42e66204adc4adcd980e718aa12a13\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Sep 4 05:20:53.735841 containerd[1593]: time="2025-09-04T05:20:53.735784692Z" level=info msg="Container 1c1c9cb31895b6049fc714b0459a671757b2b5046bcbb319fbe7f9bd3acda998: CDI devices from CRI Config.CDIDevices: []" Sep 4 05:20:55.109234 containerd[1593]: time="2025-09-04T05:20:55.109174351Z" level=info msg="CreateContainer within sandbox \"0eb708b84c252c9d27fca3eb53bf2371fc42e66204adc4adcd980e718aa12a13\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"1c1c9cb31895b6049fc714b0459a671757b2b5046bcbb319fbe7f9bd3acda998\"" Sep 4 05:20:55.109789 containerd[1593]: time="2025-09-04T05:20:55.109768310Z" level=info msg="StartContainer for \"1c1c9cb31895b6049fc714b0459a671757b2b5046bcbb319fbe7f9bd3acda998\"" Sep 4 05:20:55.110569 containerd[1593]: time="2025-09-04T05:20:55.110544319Z" level=info msg="connecting to shim 1c1c9cb31895b6049fc714b0459a671757b2b5046bcbb319fbe7f9bd3acda998" address="unix:///run/containerd/s/daa5a159d6f0a3d75aac3393dee337596272e1f0846f8bebb69c5e0aa7966440" protocol=ttrpc version=3 Sep 4 05:20:55.132550 systemd[1]: Started cri-containerd-1c1c9cb31895b6049fc714b0459a671757b2b5046bcbb319fbe7f9bd3acda998.scope - libcontainer container 1c1c9cb31895b6049fc714b0459a671757b2b5046bcbb319fbe7f9bd3acda998. Sep 4 05:20:55.748917 containerd[1593]: time="2025-09-04T05:20:55.748875827Z" level=info msg="StartContainer for \"1c1c9cb31895b6049fc714b0459a671757b2b5046bcbb319fbe7f9bd3acda998\" returns successfully" Sep 4 05:20:55.757161 systemd[1]: systemd-sysctl.service: Deactivated successfully. Sep 4 05:20:55.757423 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Sep 4 05:20:55.759823 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... Sep 4 05:20:55.761751 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Sep 4 05:20:55.763123 containerd[1593]: time="2025-09-04T05:20:55.763085615Z" level=info msg="received exit event container_id:\"1c1c9cb31895b6049fc714b0459a671757b2b5046bcbb319fbe7f9bd3acda998\" id:\"1c1c9cb31895b6049fc714b0459a671757b2b5046bcbb319fbe7f9bd3acda998\" pid:3223 exited_at:{seconds:1756963255 nanos:762848719}" Sep 4 05:20:55.763225 containerd[1593]: time="2025-09-04T05:20:55.763193939Z" level=info msg="TaskExit event in podsandbox handler container_id:\"1c1c9cb31895b6049fc714b0459a671757b2b5046bcbb319fbe7f9bd3acda998\" id:\"1c1c9cb31895b6049fc714b0459a671757b2b5046bcbb319fbe7f9bd3acda998\" pid:3223 exited_at:{seconds:1756963255 nanos:762848719}" Sep 4 05:20:55.764274 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Sep 4 05:20:55.765086 systemd[1]: cri-containerd-1c1c9cb31895b6049fc714b0459a671757b2b5046bcbb319fbe7f9bd3acda998.scope: Deactivated successfully. Sep 4 05:20:55.788030 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-1c1c9cb31895b6049fc714b0459a671757b2b5046bcbb319fbe7f9bd3acda998-rootfs.mount: Deactivated successfully. Sep 4 05:20:55.796221 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. 
Sep 4 05:20:56.762148 containerd[1593]: time="2025-09-04T05:20:56.762082906Z" level=info msg="CreateContainer within sandbox \"0eb708b84c252c9d27fca3eb53bf2371fc42e66204adc4adcd980e718aa12a13\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Sep 4 05:20:57.100490 containerd[1593]: time="2025-09-04T05:20:57.098634160Z" level=info msg="Container 90b144b6c90b81a45cd839f70cff55eda0ff497854fa60026738b091c2ed7f88: CDI devices from CRI Config.CDIDevices: []" Sep 4 05:20:57.239348 containerd[1593]: time="2025-09-04T05:20:57.239282884Z" level=info msg="CreateContainer within sandbox \"0eb708b84c252c9d27fca3eb53bf2371fc42e66204adc4adcd980e718aa12a13\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"90b144b6c90b81a45cd839f70cff55eda0ff497854fa60026738b091c2ed7f88\"" Sep 4 05:20:57.240129 containerd[1593]: time="2025-09-04T05:20:57.240099220Z" level=info msg="StartContainer for \"90b144b6c90b81a45cd839f70cff55eda0ff497854fa60026738b091c2ed7f88\"" Sep 4 05:20:57.241502 containerd[1593]: time="2025-09-04T05:20:57.241473806Z" level=info msg="connecting to shim 90b144b6c90b81a45cd839f70cff55eda0ff497854fa60026738b091c2ed7f88" address="unix:///run/containerd/s/daa5a159d6f0a3d75aac3393dee337596272e1f0846f8bebb69c5e0aa7966440" protocol=ttrpc version=3 Sep 4 05:20:57.262551 systemd[1]: Started cri-containerd-90b144b6c90b81a45cd839f70cff55eda0ff497854fa60026738b091c2ed7f88.scope - libcontainer container 90b144b6c90b81a45cd839f70cff55eda0ff497854fa60026738b091c2ed7f88. Sep 4 05:20:57.313935 systemd[1]: cri-containerd-90b144b6c90b81a45cd839f70cff55eda0ff497854fa60026738b091c2ed7f88.scope: Deactivated successfully. Sep 4 05:20:57.315650 containerd[1593]: time="2025-09-04T05:20:57.315615222Z" level=info msg="TaskExit event in podsandbox handler container_id:\"90b144b6c90b81a45cd839f70cff55eda0ff497854fa60026738b091c2ed7f88\" id:\"90b144b6c90b81a45cd839f70cff55eda0ff497854fa60026738b091c2ed7f88\" pid:3271 exited_at:{seconds:1756963257 nanos:314499483}" Sep 4 05:20:57.316890 containerd[1593]: time="2025-09-04T05:20:57.316663935Z" level=info msg="received exit event container_id:\"90b144b6c90b81a45cd839f70cff55eda0ff497854fa60026738b091c2ed7f88\" id:\"90b144b6c90b81a45cd839f70cff55eda0ff497854fa60026738b091c2ed7f88\" pid:3271 exited_at:{seconds:1756963257 nanos:314499483}" Sep 4 05:20:57.335737 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4104019954.mount: Deactivated successfully. 
Sep 4 05:20:57.346284 containerd[1593]: time="2025-09-04T05:20:57.346237231Z" level=info msg="StartContainer for \"90b144b6c90b81a45cd839f70cff55eda0ff497854fa60026738b091c2ed7f88\" returns successfully" Sep 4 05:20:57.767288 containerd[1593]: time="2025-09-04T05:20:57.766718630Z" level=info msg="CreateContainer within sandbox \"0eb708b84c252c9d27fca3eb53bf2371fc42e66204adc4adcd980e718aa12a13\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Sep 4 05:20:57.779106 containerd[1593]: time="2025-09-04T05:20:57.779032362Z" level=info msg="Container d99a8db96111c81bd69a2c424e62c4715b904da8acf676884d010357c1c2d8c5: CDI devices from CRI Config.CDIDevices: []" Sep 4 05:20:57.789916 containerd[1593]: time="2025-09-04T05:20:57.789861772Z" level=info msg="CreateContainer within sandbox \"0eb708b84c252c9d27fca3eb53bf2371fc42e66204adc4adcd980e718aa12a13\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"d99a8db96111c81bd69a2c424e62c4715b904da8acf676884d010357c1c2d8c5\"" Sep 4 05:20:57.791598 containerd[1593]: time="2025-09-04T05:20:57.791561298Z" level=info msg="StartContainer for \"d99a8db96111c81bd69a2c424e62c4715b904da8acf676884d010357c1c2d8c5\"" Sep 4 05:20:57.792680 containerd[1593]: time="2025-09-04T05:20:57.792648282Z" level=info msg="connecting to shim d99a8db96111c81bd69a2c424e62c4715b904da8acf676884d010357c1c2d8c5" address="unix:///run/containerd/s/daa5a159d6f0a3d75aac3393dee337596272e1f0846f8bebb69c5e0aa7966440" protocol=ttrpc version=3 Sep 4 05:20:57.824572 systemd[1]: Started cri-containerd-d99a8db96111c81bd69a2c424e62c4715b904da8acf676884d010357c1c2d8c5.scope - libcontainer container d99a8db96111c81bd69a2c424e62c4715b904da8acf676884d010357c1c2d8c5. Sep 4 05:20:57.868651 systemd[1]: cri-containerd-d99a8db96111c81bd69a2c424e62c4715b904da8acf676884d010357c1c2d8c5.scope: Deactivated successfully. Sep 4 05:20:57.870209 containerd[1593]: time="2025-09-04T05:20:57.869297156Z" level=info msg="TaskExit event in podsandbox handler container_id:\"d99a8db96111c81bd69a2c424e62c4715b904da8acf676884d010357c1c2d8c5\" id:\"d99a8db96111c81bd69a2c424e62c4715b904da8acf676884d010357c1c2d8c5\" pid:3325 exited_at:{seconds:1756963257 nanos:868799921}" Sep 4 05:20:57.931564 containerd[1593]: time="2025-09-04T05:20:57.931478947Z" level=info msg="received exit event container_id:\"d99a8db96111c81bd69a2c424e62c4715b904da8acf676884d010357c1c2d8c5\" id:\"d99a8db96111c81bd69a2c424e62c4715b904da8acf676884d010357c1c2d8c5\" pid:3325 exited_at:{seconds:1756963257 nanos:868799921}" Sep 4 05:20:57.940004 containerd[1593]: time="2025-09-04T05:20:57.939902331Z" level=info msg="StartContainer for \"d99a8db96111c81bd69a2c424e62c4715b904da8acf676884d010357c1c2d8c5\" returns successfully" Sep 4 05:20:58.098864 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-90b144b6c90b81a45cd839f70cff55eda0ff497854fa60026738b091c2ed7f88-rootfs.mount: Deactivated successfully. Sep 4 05:20:58.770035 containerd[1593]: time="2025-09-04T05:20:58.769989370Z" level=info msg="CreateContainer within sandbox \"0eb708b84c252c9d27fca3eb53bf2371fc42e66204adc4adcd980e718aa12a13\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Sep 4 05:20:59.453399 containerd[1593]: time="2025-09-04T05:20:59.452726287Z" level=info msg="Container 708c8c4dd3aada32182bc81b1740209674583a32db9a9950751a6cdc91c27eb0: CDI devices from CRI Config.CDIDevices: []" Sep 4 05:20:59.456488 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3098163838.mount: Deactivated successfully. 
Sep 4 05:21:00.193209 containerd[1593]: time="2025-09-04T05:21:00.193149678Z" level=info msg="CreateContainer within sandbox \"0eb708b84c252c9d27fca3eb53bf2371fc42e66204adc4adcd980e718aa12a13\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"708c8c4dd3aada32182bc81b1740209674583a32db9a9950751a6cdc91c27eb0\"" Sep 4 05:21:00.193803 containerd[1593]: time="2025-09-04T05:21:00.193776537Z" level=info msg="StartContainer for \"708c8c4dd3aada32182bc81b1740209674583a32db9a9950751a6cdc91c27eb0\"" Sep 4 05:21:00.195003 containerd[1593]: time="2025-09-04T05:21:00.194977715Z" level=info msg="connecting to shim 708c8c4dd3aada32182bc81b1740209674583a32db9a9950751a6cdc91c27eb0" address="unix:///run/containerd/s/daa5a159d6f0a3d75aac3393dee337596272e1f0846f8bebb69c5e0aa7966440" protocol=ttrpc version=3 Sep 4 05:21:00.220240 containerd[1593]: time="2025-09-04T05:21:00.220188211Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 05:21:00.221518 systemd[1]: Started cri-containerd-708c8c4dd3aada32182bc81b1740209674583a32db9a9950751a6cdc91c27eb0.scope - libcontainer container 708c8c4dd3aada32182bc81b1740209674583a32db9a9950751a6cdc91c27eb0. Sep 4 05:21:00.244525 containerd[1593]: time="2025-09-04T05:21:00.244453741Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=18904197" Sep 4 05:21:00.294671 containerd[1593]: time="2025-09-04T05:21:00.294018485Z" level=info msg="ImageCreate event name:\"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 05:21:00.296334 containerd[1593]: time="2025-09-04T05:21:00.296296027Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"18897442\" in 8.866214471s" Sep 4 05:21:00.296520 containerd[1593]: time="2025-09-04T05:21:00.296468752Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\"" Sep 4 05:21:00.299255 containerd[1593]: time="2025-09-04T05:21:00.299222889Z" level=info msg="CreateContainer within sandbox \"3a7630ff8d956f367acfb73b09d38c86d63c032ec0aecd40d46bcf96160f7af7\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Sep 4 05:21:00.313417 containerd[1593]: time="2025-09-04T05:21:00.313364497Z" level=info msg="StartContainer for \"708c8c4dd3aada32182bc81b1740209674583a32db9a9950751a6cdc91c27eb0\" returns successfully" Sep 4 05:21:00.375634 containerd[1593]: time="2025-09-04T05:21:00.375578792Z" level=info msg="Container b1f0d81d812b90eb0299082104413906edac1f64b100f0f61da24f3977282644: CDI devices from CRI Config.CDIDevices: []" Sep 4 05:21:00.386488 containerd[1593]: time="2025-09-04T05:21:00.386442085Z" level=info msg="CreateContainer within sandbox \"3a7630ff8d956f367acfb73b09d38c86d63c032ec0aecd40d46bcf96160f7af7\" for 
&ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"b1f0d81d812b90eb0299082104413906edac1f64b100f0f61da24f3977282644\"" Sep 4 05:21:00.387823 containerd[1593]: time="2025-09-04T05:21:00.387795229Z" level=info msg="StartContainer for \"b1f0d81d812b90eb0299082104413906edac1f64b100f0f61da24f3977282644\"" Sep 4 05:21:00.389357 containerd[1593]: time="2025-09-04T05:21:00.389320437Z" level=info msg="TaskExit event in podsandbox handler container_id:\"708c8c4dd3aada32182bc81b1740209674583a32db9a9950751a6cdc91c27eb0\" id:\"aa395b23e0867e0bffd042b0d8118e5bb43e1617cf8c33664b70a0143d3e1b38\" pid:3399 exited_at:{seconds:1756963260 nanos:389030591}" Sep 4 05:21:00.390254 containerd[1593]: time="2025-09-04T05:21:00.390142943Z" level=info msg="connecting to shim b1f0d81d812b90eb0299082104413906edac1f64b100f0f61da24f3977282644" address="unix:///run/containerd/s/cf03dd3dc826ff96587426b9db543d9dd87cb6738428f7df2bc71d023edd31e0" protocol=ttrpc version=3 Sep 4 05:21:00.421147 kubelet[2736]: I0904 05:21:00.421107 2736 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Sep 4 05:21:00.422685 systemd[1]: Started cri-containerd-b1f0d81d812b90eb0299082104413906edac1f64b100f0f61da24f3977282644.scope - libcontainer container b1f0d81d812b90eb0299082104413906edac1f64b100f0f61da24f3977282644. Sep 4 05:21:00.466006 systemd[1]: Started sshd@7-10.0.0.60:22-10.0.0.1:40476.service - OpenSSH per-connection server daemon (10.0.0.1:40476). Sep 4 05:21:00.477716 kubelet[2736]: I0904 05:21:00.477674 2736 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k6rlj\" (UniqueName: \"kubernetes.io/projected/b5387134-9b9c-40e4-b72b-3a46ce7a5c18-kube-api-access-k6rlj\") pod \"coredns-668d6bf9bc-nwnzr\" (UID: \"b5387134-9b9c-40e4-b72b-3a46ce7a5c18\") " pod="kube-system/coredns-668d6bf9bc-nwnzr" Sep 4 05:21:00.477716 kubelet[2736]: I0904 05:21:00.477716 2736 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/27dda3ec-0a03-4f8f-a31c-287659236b13-config-volume\") pod \"coredns-668d6bf9bc-lcbbz\" (UID: \"27dda3ec-0a03-4f8f-a31c-287659236b13\") " pod="kube-system/coredns-668d6bf9bc-lcbbz" Sep 4 05:21:00.477956 kubelet[2736]: I0904 05:21:00.477746 2736 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gn75b\" (UniqueName: \"kubernetes.io/projected/27dda3ec-0a03-4f8f-a31c-287659236b13-kube-api-access-gn75b\") pod \"coredns-668d6bf9bc-lcbbz\" (UID: \"27dda3ec-0a03-4f8f-a31c-287659236b13\") " pod="kube-system/coredns-668d6bf9bc-lcbbz" Sep 4 05:21:00.477956 kubelet[2736]: I0904 05:21:00.477770 2736 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b5387134-9b9c-40e4-b72b-3a46ce7a5c18-config-volume\") pod \"coredns-668d6bf9bc-nwnzr\" (UID: \"b5387134-9b9c-40e4-b72b-3a46ce7a5c18\") " pod="kube-system/coredns-668d6bf9bc-nwnzr" Sep 4 05:21:00.492896 systemd[1]: Created slice kubepods-burstable-podb5387134_9b9c_40e4_b72b_3a46ce7a5c18.slice - libcontainer container kubepods-burstable-podb5387134_9b9c_40e4_b72b_3a46ce7a5c18.slice. Sep 4 05:21:00.499841 systemd[1]: Created slice kubepods-burstable-pod27dda3ec_0a03_4f8f_a31c_287659236b13.slice - libcontainer container kubepods-burstable-pod27dda3ec_0a03_4f8f_a31c_287659236b13.slice. 
Sep 4 05:21:00.623152 containerd[1593]: time="2025-09-04T05:21:00.622988711Z" level=info msg="StartContainer for \"b1f0d81d812b90eb0299082104413906edac1f64b100f0f61da24f3977282644\" returns successfully" Sep 4 05:21:00.651200 sshd[3445]: Accepted publickey for core from 10.0.0.1 port 40476 ssh2: RSA SHA256:Ny8nYDOBhPv0PH6gzvqXa8DSRfbQSyp+8RjA0Ibmoyo Sep 4 05:21:00.653095 sshd-session[3445]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 05:21:00.676207 systemd-logind[1582]: New session 8 of user core. Sep 4 05:21:00.680553 systemd[1]: Started session-8.scope - Session 8 of User core. Sep 4 05:21:00.842001 containerd[1593]: time="2025-09-04T05:21:00.841871388Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-nwnzr,Uid:b5387134-9b9c-40e4-b72b-3a46ce7a5c18,Namespace:kube-system,Attempt:0,}" Sep 4 05:21:00.842532 containerd[1593]: time="2025-09-04T05:21:00.842365878Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-lcbbz,Uid:27dda3ec-0a03-4f8f-a31c-287659236b13,Namespace:kube-system,Attempt:0,}" Sep 4 05:21:00.995997 sshd[3464]: Connection closed by 10.0.0.1 port 40476 Sep 4 05:21:00.996878 sshd-session[3445]: pam_unix(sshd:session): session closed for user core Sep 4 05:21:01.007927 systemd[1]: sshd@7-10.0.0.60:22-10.0.0.1:40476.service: Deactivated successfully. Sep 4 05:21:01.014043 systemd[1]: session-8.scope: Deactivated successfully. Sep 4 05:21:01.018476 systemd-logind[1582]: Session 8 logged out. Waiting for processes to exit. Sep 4 05:21:01.021971 systemd-logind[1582]: Removed session 8. Sep 4 05:21:01.105063 kubelet[2736]: I0904 05:21:01.104871 2736 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-6c4d7847fc-4cnkh" podStartSLOduration=2.389544522 podStartE2EDuration="26.103154974s" podCreationTimestamp="2025-09-04 05:20:35 +0000 UTC" firstStartedPulling="2025-09-04 05:20:36.583881283 +0000 UTC m=+5.544202888" lastFinishedPulling="2025-09-04 05:21:00.297491735 +0000 UTC m=+29.257813340" observedRunningTime="2025-09-04 05:21:01.006590856 +0000 UTC m=+29.966912461" watchObservedRunningTime="2025-09-04 05:21:01.103154974 +0000 UTC m=+30.063476579" Sep 4 05:21:04.750821 systemd-networkd[1492]: cilium_host: Link UP Sep 4 05:21:04.750978 systemd-networkd[1492]: cilium_net: Link UP Sep 4 05:21:04.751155 systemd-networkd[1492]: cilium_net: Gained carrier Sep 4 05:21:04.751348 systemd-networkd[1492]: cilium_host: Gained carrier Sep 4 05:21:04.855806 systemd-networkd[1492]: cilium_vxlan: Link UP Sep 4 05:21:04.855818 systemd-networkd[1492]: cilium_vxlan: Gained carrier Sep 4 05:21:04.888522 systemd-networkd[1492]: cilium_host: Gained IPv6LL Sep 4 05:21:05.075417 kernel: NET: Registered PF_ALG protocol family Sep 4 05:21:05.719575 systemd-networkd[1492]: cilium_net: Gained IPv6LL Sep 4 05:21:05.763157 systemd-networkd[1492]: lxc_health: Link UP Sep 4 05:21:05.763701 systemd-networkd[1492]: lxc_health: Gained carrier Sep 4 05:21:06.009603 systemd[1]: Started sshd@8-10.0.0.60:22-10.0.0.1:40478.service - OpenSSH per-connection server daemon (10.0.0.1:40478). 
Sep 4 05:21:06.032098 systemd-networkd[1492]: lxc55031fbbffbc: Link UP Sep 4 05:21:06.040427 kernel: eth0: renamed from tmpdf8a7 Sep 4 05:21:06.042343 systemd-networkd[1492]: lxc55031fbbffbc: Gained carrier Sep 4 05:21:06.069733 systemd-networkd[1492]: lxc94f6cc181199: Link UP Sep 4 05:21:06.076406 kernel: eth0: renamed from tmpa9532 Sep 4 05:21:06.079277 systemd-networkd[1492]: lxc94f6cc181199: Gained carrier Sep 4 05:21:06.081222 sshd[3904]: Accepted publickey for core from 10.0.0.1 port 40478 ssh2: RSA SHA256:Ny8nYDOBhPv0PH6gzvqXa8DSRfbQSyp+8RjA0Ibmoyo Sep 4 05:21:06.083151 sshd-session[3904]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 05:21:06.097420 systemd-logind[1582]: New session 9 of user core. Sep 4 05:21:06.105504 systemd[1]: Started session-9.scope - Session 9 of User core. Sep 4 05:21:06.231574 systemd-networkd[1492]: cilium_vxlan: Gained IPv6LL Sep 4 05:21:06.243408 sshd[3920]: Connection closed by 10.0.0.1 port 40478 Sep 4 05:21:06.245237 sshd-session[3904]: pam_unix(sshd:session): session closed for user core Sep 4 05:21:06.252948 systemd[1]: sshd@8-10.0.0.60:22-10.0.0.1:40478.service: Deactivated successfully. Sep 4 05:21:06.257752 systemd[1]: session-9.scope: Deactivated successfully. Sep 4 05:21:06.260320 systemd-logind[1582]: Session 9 logged out. Waiting for processes to exit. Sep 4 05:21:06.262811 systemd-logind[1582]: Removed session 9. Sep 4 05:21:06.300410 kubelet[2736]: I0904 05:21:06.300194 2736 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-b2zfq" podStartSLOduration=16.423485399 podStartE2EDuration="31.300169073s" podCreationTimestamp="2025-09-04 05:20:35 +0000 UTC" firstStartedPulling="2025-09-04 05:20:36.553217483 +0000 UTC m=+5.513539088" lastFinishedPulling="2025-09-04 05:20:51.429901147 +0000 UTC m=+20.390222762" observedRunningTime="2025-09-04 05:21:01.099744493 +0000 UTC m=+30.060066088" watchObservedRunningTime="2025-09-04 05:21:06.300169073 +0000 UTC m=+35.260490668" Sep 4 05:21:07.255618 systemd-networkd[1492]: lxc94f6cc181199: Gained IPv6LL Sep 4 05:21:07.383671 systemd-networkd[1492]: lxc_health: Gained IPv6LL Sep 4 05:21:07.959647 systemd-networkd[1492]: lxc55031fbbffbc: Gained IPv6LL Sep 4 05:21:09.535930 containerd[1593]: time="2025-09-04T05:21:09.535824337Z" level=info msg="connecting to shim a953255951cc2036dbf9582886275c576b8ed25d34f03ac01c32fe00b7a53c2d" address="unix:///run/containerd/s/02f571f867fe543ea29860df1d9ff7a270b552ced3efd1e97eaeecbc4f749a19" namespace=k8s.io protocol=ttrpc version=3 Sep 4 05:21:09.538083 containerd[1593]: time="2025-09-04T05:21:09.538034017Z" level=info msg="connecting to shim df8a7ff7180dc98e8f7f53efb0ce97aac21e382a6478b0bf33b6ebf9f5400b98" address="unix:///run/containerd/s/5f5cd602979ba5b845cc330d6ea14704f3d10f1d991a5ba85c96551a487284fe" namespace=k8s.io protocol=ttrpc version=3 Sep 4 05:21:09.565504 systemd[1]: Started cri-containerd-a953255951cc2036dbf9582886275c576b8ed25d34f03ac01c32fe00b7a53c2d.scope - libcontainer container a953255951cc2036dbf9582886275c576b8ed25d34f03ac01c32fe00b7a53c2d. Sep 4 05:21:09.566909 systemd[1]: Started cri-containerd-df8a7ff7180dc98e8f7f53efb0ce97aac21e382a6478b0bf33b6ebf9f5400b98.scope - libcontainer container df8a7ff7180dc98e8f7f53efb0ce97aac21e382a6478b0bf33b6ebf9f5400b98. 
Sep 4 05:21:09.583350 systemd-resolved[1404]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 4 05:21:09.583873 systemd-resolved[1404]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 4 05:21:09.616450 containerd[1593]: time="2025-09-04T05:21:09.616407002Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-lcbbz,Uid:27dda3ec-0a03-4f8f-a31c-287659236b13,Namespace:kube-system,Attempt:0,} returns sandbox id \"a953255951cc2036dbf9582886275c576b8ed25d34f03ac01c32fe00b7a53c2d\"" Sep 4 05:21:09.620970 containerd[1593]: time="2025-09-04T05:21:09.620916301Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-nwnzr,Uid:b5387134-9b9c-40e4-b72b-3a46ce7a5c18,Namespace:kube-system,Attempt:0,} returns sandbox id \"df8a7ff7180dc98e8f7f53efb0ce97aac21e382a6478b0bf33b6ebf9f5400b98\"" Sep 4 05:21:09.621267 containerd[1593]: time="2025-09-04T05:21:09.621220733Z" level=info msg="CreateContainer within sandbox \"a953255951cc2036dbf9582886275c576b8ed25d34f03ac01c32fe00b7a53c2d\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Sep 4 05:21:09.625292 containerd[1593]: time="2025-09-04T05:21:09.624786921Z" level=info msg="CreateContainer within sandbox \"df8a7ff7180dc98e8f7f53efb0ce97aac21e382a6478b0bf33b6ebf9f5400b98\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Sep 4 05:21:09.637106 containerd[1593]: time="2025-09-04T05:21:09.637050543Z" level=info msg="Container 97f63a6f3200e62c0eb3e610996a44b7fae18312df23a89989d5d906e2985f7c: CDI devices from CRI Config.CDIDevices: []" Sep 4 05:21:09.643540 containerd[1593]: time="2025-09-04T05:21:09.643496800Z" level=info msg="CreateContainer within sandbox \"a953255951cc2036dbf9582886275c576b8ed25d34f03ac01c32fe00b7a53c2d\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"97f63a6f3200e62c0eb3e610996a44b7fae18312df23a89989d5d906e2985f7c\"" Sep 4 05:21:09.644254 containerd[1593]: time="2025-09-04T05:21:09.644218586Z" level=info msg="StartContainer for \"97f63a6f3200e62c0eb3e610996a44b7fae18312df23a89989d5d906e2985f7c\"" Sep 4 05:21:09.645098 containerd[1593]: time="2025-09-04T05:21:09.645065026Z" level=info msg="connecting to shim 97f63a6f3200e62c0eb3e610996a44b7fae18312df23a89989d5d906e2985f7c" address="unix:///run/containerd/s/02f571f867fe543ea29860df1d9ff7a270b552ced3efd1e97eaeecbc4f749a19" protocol=ttrpc version=3 Sep 4 05:21:09.666522 systemd[1]: Started cri-containerd-97f63a6f3200e62c0eb3e610996a44b7fae18312df23a89989d5d906e2985f7c.scope - libcontainer container 97f63a6f3200e62c0eb3e610996a44b7fae18312df23a89989d5d906e2985f7c. 
Sep 4 05:21:09.667607 containerd[1593]: time="2025-09-04T05:21:09.667575223Z" level=info msg="Container fc64c3e6b721c94cf8eaf748b54e474106ff82e7c8e5f619f0a0485d1482fb3f: CDI devices from CRI Config.CDIDevices: []" Sep 4 05:21:09.676464 containerd[1593]: time="2025-09-04T05:21:09.676432278Z" level=info msg="CreateContainer within sandbox \"df8a7ff7180dc98e8f7f53efb0ce97aac21e382a6478b0bf33b6ebf9f5400b98\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"fc64c3e6b721c94cf8eaf748b54e474106ff82e7c8e5f619f0a0485d1482fb3f\"" Sep 4 05:21:09.677393 containerd[1593]: time="2025-09-04T05:21:09.677300248Z" level=info msg="StartContainer for \"fc64c3e6b721c94cf8eaf748b54e474106ff82e7c8e5f619f0a0485d1482fb3f\"" Sep 4 05:21:09.693703 containerd[1593]: time="2025-09-04T05:21:09.693649704Z" level=info msg="connecting to shim fc64c3e6b721c94cf8eaf748b54e474106ff82e7c8e5f619f0a0485d1482fb3f" address="unix:///run/containerd/s/5f5cd602979ba5b845cc330d6ea14704f3d10f1d991a5ba85c96551a487284fe" protocol=ttrpc version=3 Sep 4 05:21:09.711737 containerd[1593]: time="2025-09-04T05:21:09.711692220Z" level=info msg="StartContainer for \"97f63a6f3200e62c0eb3e610996a44b7fae18312df23a89989d5d906e2985f7c\" returns successfully" Sep 4 05:21:09.739552 systemd[1]: Started cri-containerd-fc64c3e6b721c94cf8eaf748b54e474106ff82e7c8e5f619f0a0485d1482fb3f.scope - libcontainer container fc64c3e6b721c94cf8eaf748b54e474106ff82e7c8e5f619f0a0485d1482fb3f. Sep 4 05:21:09.776899 containerd[1593]: time="2025-09-04T05:21:09.776837863Z" level=info msg="StartContainer for \"fc64c3e6b721c94cf8eaf748b54e474106ff82e7c8e5f619f0a0485d1482fb3f\" returns successfully" Sep 4 05:21:09.866487 kubelet[2736]: I0904 05:21:09.866253 2736 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-lcbbz" podStartSLOduration=34.866224372 podStartE2EDuration="34.866224372s" podCreationTimestamp="2025-09-04 05:20:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-04 05:21:09.834284244 +0000 UTC m=+38.794605849" watchObservedRunningTime="2025-09-04 05:21:09.866224372 +0000 UTC m=+38.826545977" Sep 4 05:21:09.867394 kubelet[2736]: I0904 05:21:09.867320 2736 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-nwnzr" podStartSLOduration=33.86730953 podStartE2EDuration="33.86730953s" podCreationTimestamp="2025-09-04 05:20:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-04 05:21:09.866346873 +0000 UTC m=+38.826668468" watchObservedRunningTime="2025-09-04 05:21:09.86730953 +0000 UTC m=+38.827631135" Sep 4 05:21:11.258543 systemd[1]: Started sshd@9-10.0.0.60:22-10.0.0.1:36436.service - OpenSSH per-connection server daemon (10.0.0.1:36436). Sep 4 05:21:11.334928 sshd[4125]: Accepted publickey for core from 10.0.0.1 port 36436 ssh2: RSA SHA256:Ny8nYDOBhPv0PH6gzvqXa8DSRfbQSyp+8RjA0Ibmoyo Sep 4 05:21:11.336824 sshd-session[4125]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 05:21:11.342981 systemd-logind[1582]: New session 10 of user core. Sep 4 05:21:11.351566 systemd[1]: Started session-10.scope - Session 10 of User core. 
Sep 4 05:21:11.480171 sshd[4130]: Connection closed by 10.0.0.1 port 36436 Sep 4 05:21:11.480556 sshd-session[4125]: pam_unix(sshd:session): session closed for user core Sep 4 05:21:11.485599 systemd[1]: sshd@9-10.0.0.60:22-10.0.0.1:36436.service: Deactivated successfully. Sep 4 05:21:11.487707 systemd[1]: session-10.scope: Deactivated successfully. Sep 4 05:21:11.488720 systemd-logind[1582]: Session 10 logged out. Waiting for processes to exit. Sep 4 05:21:11.490413 systemd-logind[1582]: Removed session 10. Sep 4 05:21:16.499341 systemd[1]: Started sshd@10-10.0.0.60:22-10.0.0.1:36444.service - OpenSSH per-connection server daemon (10.0.0.1:36444). Sep 4 05:21:16.568127 sshd[4144]: Accepted publickey for core from 10.0.0.1 port 36444 ssh2: RSA SHA256:Ny8nYDOBhPv0PH6gzvqXa8DSRfbQSyp+8RjA0Ibmoyo Sep 4 05:21:16.569695 sshd-session[4144]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 05:21:16.574116 systemd-logind[1582]: New session 11 of user core. Sep 4 05:21:16.586530 systemd[1]: Started session-11.scope - Session 11 of User core. Sep 4 05:21:16.700849 sshd[4147]: Connection closed by 10.0.0.1 port 36444 Sep 4 05:21:16.701199 sshd-session[4144]: pam_unix(sshd:session): session closed for user core Sep 4 05:21:16.714117 systemd[1]: sshd@10-10.0.0.60:22-10.0.0.1:36444.service: Deactivated successfully. Sep 4 05:21:16.715900 systemd[1]: session-11.scope: Deactivated successfully. Sep 4 05:21:16.716783 systemd-logind[1582]: Session 11 logged out. Waiting for processes to exit. Sep 4 05:21:16.719552 systemd[1]: Started sshd@11-10.0.0.60:22-10.0.0.1:36456.service - OpenSSH per-connection server daemon (10.0.0.1:36456). Sep 4 05:21:16.720478 systemd-logind[1582]: Removed session 11. Sep 4 05:21:16.785915 sshd[4161]: Accepted publickey for core from 10.0.0.1 port 36456 ssh2: RSA SHA256:Ny8nYDOBhPv0PH6gzvqXa8DSRfbQSyp+8RjA0Ibmoyo Sep 4 05:21:16.787556 sshd-session[4161]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 05:21:16.792587 systemd-logind[1582]: New session 12 of user core. Sep 4 05:21:16.799505 systemd[1]: Started session-12.scope - Session 12 of User core. Sep 4 05:21:17.015101 sshd[4164]: Connection closed by 10.0.0.1 port 36456 Sep 4 05:21:17.015730 sshd-session[4161]: pam_unix(sshd:session): session closed for user core Sep 4 05:21:17.025174 systemd[1]: sshd@11-10.0.0.60:22-10.0.0.1:36456.service: Deactivated successfully. Sep 4 05:21:17.027139 systemd[1]: session-12.scope: Deactivated successfully. Sep 4 05:21:17.030603 systemd-logind[1582]: Session 12 logged out. Waiting for processes to exit. Sep 4 05:21:17.035680 systemd[1]: Started sshd@12-10.0.0.60:22-10.0.0.1:36466.service - OpenSSH per-connection server daemon (10.0.0.1:36466). Sep 4 05:21:17.037482 systemd-logind[1582]: Removed session 12. Sep 4 05:21:17.097336 sshd[4175]: Accepted publickey for core from 10.0.0.1 port 36466 ssh2: RSA SHA256:Ny8nYDOBhPv0PH6gzvqXa8DSRfbQSyp+8RjA0Ibmoyo Sep 4 05:21:17.099141 sshd-session[4175]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 05:21:17.103975 systemd-logind[1582]: New session 13 of user core. Sep 4 05:21:17.114685 systemd[1]: Started session-13.scope - Session 13 of User core. Sep 4 05:21:17.300312 sshd[4178]: Connection closed by 10.0.0.1 port 36466 Sep 4 05:21:17.301567 sshd-session[4175]: pam_unix(sshd:session): session closed for user core Sep 4 05:21:17.305837 systemd[1]: sshd@12-10.0.0.60:22-10.0.0.1:36466.service: Deactivated successfully. 
Sep 4 05:21:17.308073 systemd[1]: session-13.scope: Deactivated successfully. Sep 4 05:21:17.308837 systemd-logind[1582]: Session 13 logged out. Waiting for processes to exit. Sep 4 05:21:17.310036 systemd-logind[1582]: Removed session 13. Sep 4 05:21:22.318731 systemd[1]: Started sshd@13-10.0.0.60:22-10.0.0.1:52718.service - OpenSSH per-connection server daemon (10.0.0.1:52718). Sep 4 05:21:22.388277 sshd[4191]: Accepted publickey for core from 10.0.0.1 port 52718 ssh2: RSA SHA256:Ny8nYDOBhPv0PH6gzvqXa8DSRfbQSyp+8RjA0Ibmoyo Sep 4 05:21:22.389922 sshd-session[4191]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 05:21:22.394752 systemd-logind[1582]: New session 14 of user core. Sep 4 05:21:22.404613 systemd[1]: Started session-14.scope - Session 14 of User core. Sep 4 05:21:22.533195 sshd[4194]: Connection closed by 10.0.0.1 port 52718 Sep 4 05:21:22.533634 sshd-session[4191]: pam_unix(sshd:session): session closed for user core Sep 4 05:21:22.538294 systemd[1]: sshd@13-10.0.0.60:22-10.0.0.1:52718.service: Deactivated successfully. Sep 4 05:21:22.540621 systemd[1]: session-14.scope: Deactivated successfully. Sep 4 05:21:22.541623 systemd-logind[1582]: Session 14 logged out. Waiting for processes to exit. Sep 4 05:21:22.543405 systemd-logind[1582]: Removed session 14. Sep 4 05:21:27.545216 systemd[1]: Started sshd@14-10.0.0.60:22-10.0.0.1:52730.service - OpenSSH per-connection server daemon (10.0.0.1:52730). Sep 4 05:21:27.598228 sshd[4208]: Accepted publickey for core from 10.0.0.1 port 52730 ssh2: RSA SHA256:Ny8nYDOBhPv0PH6gzvqXa8DSRfbQSyp+8RjA0Ibmoyo Sep 4 05:21:27.599502 sshd-session[4208]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 05:21:27.603747 systemd-logind[1582]: New session 15 of user core. Sep 4 05:21:27.615524 systemd[1]: Started session-15.scope - Session 15 of User core. Sep 4 05:21:27.724850 sshd[4211]: Connection closed by 10.0.0.1 port 52730 Sep 4 05:21:27.725183 sshd-session[4208]: pam_unix(sshd:session): session closed for user core Sep 4 05:21:27.741252 systemd[1]: sshd@14-10.0.0.60:22-10.0.0.1:52730.service: Deactivated successfully. Sep 4 05:21:27.743337 systemd[1]: session-15.scope: Deactivated successfully. Sep 4 05:21:27.744239 systemd-logind[1582]: Session 15 logged out. Waiting for processes to exit. Sep 4 05:21:27.747214 systemd[1]: Started sshd@15-10.0.0.60:22-10.0.0.1:52732.service - OpenSSH per-connection server daemon (10.0.0.1:52732). Sep 4 05:21:27.748072 systemd-logind[1582]: Removed session 15. Sep 4 05:21:27.801898 sshd[4225]: Accepted publickey for core from 10.0.0.1 port 52732 ssh2: RSA SHA256:Ny8nYDOBhPv0PH6gzvqXa8DSRfbQSyp+8RjA0Ibmoyo Sep 4 05:21:27.803404 sshd-session[4225]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 05:21:27.808408 systemd-logind[1582]: New session 16 of user core. Sep 4 05:21:27.818534 systemd[1]: Started session-16.scope - Session 16 of User core. Sep 4 05:21:28.827572 sshd[4228]: Connection closed by 10.0.0.1 port 52732 Sep 4 05:21:28.828082 sshd-session[4225]: pam_unix(sshd:session): session closed for user core Sep 4 05:21:28.837251 systemd[1]: sshd@15-10.0.0.60:22-10.0.0.1:52732.service: Deactivated successfully. Sep 4 05:21:28.839437 systemd[1]: session-16.scope: Deactivated successfully. Sep 4 05:21:28.840353 systemd-logind[1582]: Session 16 logged out. Waiting for processes to exit. 
Sep 4 05:21:28.843836 systemd[1]: Started sshd@16-10.0.0.60:22-10.0.0.1:52740.service - OpenSSH per-connection server daemon (10.0.0.1:52740). Sep 4 05:21:28.844499 systemd-logind[1582]: Removed session 16. Sep 4 05:21:28.910814 sshd[4240]: Accepted publickey for core from 10.0.0.1 port 52740 ssh2: RSA SHA256:Ny8nYDOBhPv0PH6gzvqXa8DSRfbQSyp+8RjA0Ibmoyo Sep 4 05:21:28.912166 sshd-session[4240]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 05:21:28.916836 systemd-logind[1582]: New session 17 of user core. Sep 4 05:21:28.926514 systemd[1]: Started session-17.scope - Session 17 of User core. Sep 4 05:21:30.054229 sshd[4243]: Connection closed by 10.0.0.1 port 52740 Sep 4 05:21:30.054851 sshd-session[4240]: pam_unix(sshd:session): session closed for user core Sep 4 05:21:30.070540 systemd[1]: sshd@16-10.0.0.60:22-10.0.0.1:52740.service: Deactivated successfully. Sep 4 05:21:30.073061 systemd[1]: session-17.scope: Deactivated successfully. Sep 4 05:21:30.076761 systemd-logind[1582]: Session 17 logged out. Waiting for processes to exit. Sep 4 05:21:30.083538 systemd-logind[1582]: Removed session 17. Sep 4 05:21:30.086708 systemd[1]: Started sshd@17-10.0.0.60:22-10.0.0.1:33648.service - OpenSSH per-connection server daemon (10.0.0.1:33648). Sep 4 05:21:30.149155 sshd[4262]: Accepted publickey for core from 10.0.0.1 port 33648 ssh2: RSA SHA256:Ny8nYDOBhPv0PH6gzvqXa8DSRfbQSyp+8RjA0Ibmoyo Sep 4 05:21:30.150541 sshd-session[4262]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 05:21:30.155114 systemd-logind[1582]: New session 18 of user core. Sep 4 05:21:30.165544 systemd[1]: Started session-18.scope - Session 18 of User core. Sep 4 05:21:30.393724 sshd[4265]: Connection closed by 10.0.0.1 port 33648 Sep 4 05:21:30.394462 sshd-session[4262]: pam_unix(sshd:session): session closed for user core Sep 4 05:21:30.403260 systemd[1]: sshd@17-10.0.0.60:22-10.0.0.1:33648.service: Deactivated successfully. Sep 4 05:21:30.405452 systemd[1]: session-18.scope: Deactivated successfully. Sep 4 05:21:30.406524 systemd-logind[1582]: Session 18 logged out. Waiting for processes to exit. Sep 4 05:21:30.409783 systemd[1]: Started sshd@18-10.0.0.60:22-10.0.0.1:33658.service - OpenSSH per-connection server daemon (10.0.0.1:33658). Sep 4 05:21:30.410596 systemd-logind[1582]: Removed session 18. Sep 4 05:21:30.463642 sshd[4277]: Accepted publickey for core from 10.0.0.1 port 33658 ssh2: RSA SHA256:Ny8nYDOBhPv0PH6gzvqXa8DSRfbQSyp+8RjA0Ibmoyo Sep 4 05:21:30.465092 sshd-session[4277]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 05:21:30.470317 systemd-logind[1582]: New session 19 of user core. Sep 4 05:21:30.484507 systemd[1]: Started session-19.scope - Session 19 of User core. Sep 4 05:21:30.594282 sshd[4280]: Connection closed by 10.0.0.1 port 33658 Sep 4 05:21:30.594695 sshd-session[4277]: pam_unix(sshd:session): session closed for user core Sep 4 05:21:30.599598 systemd[1]: sshd@18-10.0.0.60:22-10.0.0.1:33658.service: Deactivated successfully. Sep 4 05:21:30.601885 systemd[1]: session-19.scope: Deactivated successfully. Sep 4 05:21:30.602806 systemd-logind[1582]: Session 19 logged out. Waiting for processes to exit. Sep 4 05:21:30.604355 systemd-logind[1582]: Removed session 19. Sep 4 05:21:35.606910 systemd[1]: Started sshd@19-10.0.0.60:22-10.0.0.1:33660.service - OpenSSH per-connection server daemon (10.0.0.1:33660). 
Sep 4 05:21:35.652824 sshd[4295]: Accepted publickey for core from 10.0.0.1 port 33660 ssh2: RSA SHA256:Ny8nYDOBhPv0PH6gzvqXa8DSRfbQSyp+8RjA0Ibmoyo Sep 4 05:21:35.654267 sshd-session[4295]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 05:21:35.658360 systemd-logind[1582]: New session 20 of user core. Sep 4 05:21:35.670528 systemd[1]: Started session-20.scope - Session 20 of User core. Sep 4 05:21:35.777066 sshd[4298]: Connection closed by 10.0.0.1 port 33660 Sep 4 05:21:35.777460 sshd-session[4295]: pam_unix(sshd:session): session closed for user core Sep 4 05:21:35.781606 systemd[1]: sshd@19-10.0.0.60:22-10.0.0.1:33660.service: Deactivated successfully. Sep 4 05:21:35.783540 systemd[1]: session-20.scope: Deactivated successfully. Sep 4 05:21:35.784407 systemd-logind[1582]: Session 20 logged out. Waiting for processes to exit. Sep 4 05:21:35.785624 systemd-logind[1582]: Removed session 20. Sep 4 05:21:40.798479 systemd[1]: Started sshd@20-10.0.0.60:22-10.0.0.1:54544.service - OpenSSH per-connection server daemon (10.0.0.1:54544). Sep 4 05:21:40.852287 sshd[4315]: Accepted publickey for core from 10.0.0.1 port 54544 ssh2: RSA SHA256:Ny8nYDOBhPv0PH6gzvqXa8DSRfbQSyp+8RjA0Ibmoyo Sep 4 05:21:40.854108 sshd-session[4315]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 05:21:40.858363 systemd-logind[1582]: New session 21 of user core. Sep 4 05:21:40.868518 systemd[1]: Started session-21.scope - Session 21 of User core. Sep 4 05:21:40.985014 sshd[4318]: Connection closed by 10.0.0.1 port 54544 Sep 4 05:21:40.985447 sshd-session[4315]: pam_unix(sshd:session): session closed for user core Sep 4 05:21:40.990820 systemd[1]: sshd@20-10.0.0.60:22-10.0.0.1:54544.service: Deactivated successfully. Sep 4 05:21:40.993426 systemd[1]: session-21.scope: Deactivated successfully. Sep 4 05:21:40.994298 systemd-logind[1582]: Session 21 logged out. Waiting for processes to exit. Sep 4 05:21:40.996106 systemd-logind[1582]: Removed session 21. Sep 4 05:21:45.999918 systemd[1]: Started sshd@21-10.0.0.60:22-10.0.0.1:54550.service - OpenSSH per-connection server daemon (10.0.0.1:54550). Sep 4 05:21:46.064593 sshd[4332]: Accepted publickey for core from 10.0.0.1 port 54550 ssh2: RSA SHA256:Ny8nYDOBhPv0PH6gzvqXa8DSRfbQSyp+8RjA0Ibmoyo Sep 4 05:21:46.066235 sshd-session[4332]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 05:21:46.071038 systemd-logind[1582]: New session 22 of user core. Sep 4 05:21:46.085499 systemd[1]: Started session-22.scope - Session 22 of User core. Sep 4 05:21:46.281249 sshd[4335]: Connection closed by 10.0.0.1 port 54550 Sep 4 05:21:46.281543 sshd-session[4332]: pam_unix(sshd:session): session closed for user core Sep 4 05:21:46.287076 systemd[1]: sshd@21-10.0.0.60:22-10.0.0.1:54550.service: Deactivated successfully. Sep 4 05:21:46.289409 systemd[1]: session-22.scope: Deactivated successfully. Sep 4 05:21:46.290368 systemd-logind[1582]: Session 22 logged out. Waiting for processes to exit. Sep 4 05:21:46.291740 systemd-logind[1582]: Removed session 22. Sep 4 05:21:51.295308 systemd[1]: Started sshd@22-10.0.0.60:22-10.0.0.1:59272.service - OpenSSH per-connection server daemon (10.0.0.1:59272). 
Sep 4 05:21:51.357268 sshd[4348]: Accepted publickey for core from 10.0.0.1 port 59272 ssh2: RSA SHA256:Ny8nYDOBhPv0PH6gzvqXa8DSRfbQSyp+8RjA0Ibmoyo Sep 4 05:21:51.358655 sshd-session[4348]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 05:21:51.363481 systemd-logind[1582]: New session 23 of user core. Sep 4 05:21:51.373510 systemd[1]: Started session-23.scope - Session 23 of User core. Sep 4 05:21:51.483908 sshd[4351]: Connection closed by 10.0.0.1 port 59272 Sep 4 05:21:51.484262 sshd-session[4348]: pam_unix(sshd:session): session closed for user core Sep 4 05:21:51.492913 systemd[1]: sshd@22-10.0.0.60:22-10.0.0.1:59272.service: Deactivated successfully. Sep 4 05:21:51.494677 systemd[1]: session-23.scope: Deactivated successfully. Sep 4 05:21:51.495393 systemd-logind[1582]: Session 23 logged out. Waiting for processes to exit. Sep 4 05:21:51.498278 systemd[1]: Started sshd@23-10.0.0.60:22-10.0.0.1:59286.service - OpenSSH per-connection server daemon (10.0.0.1:59286). Sep 4 05:21:51.498978 systemd-logind[1582]: Removed session 23. Sep 4 05:21:51.556402 sshd[4364]: Accepted publickey for core from 10.0.0.1 port 59286 ssh2: RSA SHA256:Ny8nYDOBhPv0PH6gzvqXa8DSRfbQSyp+8RjA0Ibmoyo Sep 4 05:21:51.557645 sshd-session[4364]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 05:21:51.561955 systemd-logind[1582]: New session 24 of user core. Sep 4 05:21:51.572523 systemd[1]: Started session-24.scope - Session 24 of User core. Sep 4 05:21:52.901929 containerd[1593]: time="2025-09-04T05:21:52.901861526Z" level=info msg="StopContainer for \"b1f0d81d812b90eb0299082104413906edac1f64b100f0f61da24f3977282644\" with timeout 30 (s)" Sep 4 05:21:52.922435 containerd[1593]: time="2025-09-04T05:21:52.922370371Z" level=info msg="Stop container \"b1f0d81d812b90eb0299082104413906edac1f64b100f0f61da24f3977282644\" with signal terminated" Sep 4 05:21:52.938301 systemd[1]: cri-containerd-b1f0d81d812b90eb0299082104413906edac1f64b100f0f61da24f3977282644.scope: Deactivated successfully. 
Sep 4 05:21:52.942207 containerd[1593]: time="2025-09-04T05:21:52.942163965Z" level=info msg="TaskExit event in podsandbox handler container_id:\"b1f0d81d812b90eb0299082104413906edac1f64b100f0f61da24f3977282644\" id:\"b1f0d81d812b90eb0299082104413906edac1f64b100f0f61da24f3977282644\" pid:3437 exited_at:{seconds:1756963312 nanos:941283899}" Sep 4 05:21:52.942306 containerd[1593]: time="2025-09-04T05:21:52.942249798Z" level=info msg="received exit event container_id:\"b1f0d81d812b90eb0299082104413906edac1f64b100f0f61da24f3977282644\" id:\"b1f0d81d812b90eb0299082104413906edac1f64b100f0f61da24f3977282644\" pid:3437 exited_at:{seconds:1756963312 nanos:941283899}" Sep 4 05:21:52.953578 containerd[1593]: time="2025-09-04T05:21:52.953517401Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Sep 4 05:21:52.954494 containerd[1593]: time="2025-09-04T05:21:52.954462641Z" level=info msg="TaskExit event in podsandbox handler container_id:\"708c8c4dd3aada32182bc81b1740209674583a32db9a9950751a6cdc91c27eb0\" id:\"efa80313bc1046d590115f30d3c84f7c8ce3a020e8f1507e2de7c8ec2ca2c7f9\" pid:4393 exited_at:{seconds:1756963312 nanos:954106604}" Sep 4 05:21:52.958668 containerd[1593]: time="2025-09-04T05:21:52.958609470Z" level=info msg="StopContainer for \"708c8c4dd3aada32182bc81b1740209674583a32db9a9950751a6cdc91c27eb0\" with timeout 2 (s)" Sep 4 05:21:52.958992 containerd[1593]: time="2025-09-04T05:21:52.958967493Z" level=info msg="Stop container \"708c8c4dd3aada32182bc81b1740209674583a32db9a9950751a6cdc91c27eb0\" with signal terminated" Sep 4 05:21:52.968364 systemd-networkd[1492]: lxc_health: Link DOWN Sep 4 05:21:52.968492 systemd-networkd[1492]: lxc_health: Lost carrier Sep 4 05:21:52.970554 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b1f0d81d812b90eb0299082104413906edac1f64b100f0f61da24f3977282644-rootfs.mount: Deactivated successfully. Sep 4 05:21:52.990532 systemd[1]: cri-containerd-708c8c4dd3aada32182bc81b1740209674583a32db9a9950751a6cdc91c27eb0.scope: Deactivated successfully. Sep 4 05:21:52.991963 containerd[1593]: time="2025-09-04T05:21:52.991072196Z" level=info msg="received exit event container_id:\"708c8c4dd3aada32182bc81b1740209674583a32db9a9950751a6cdc91c27eb0\" id:\"708c8c4dd3aada32182bc81b1740209674583a32db9a9950751a6cdc91c27eb0\" pid:3366 exited_at:{seconds:1756963312 nanos:990879911}" Sep 4 05:21:52.991963 containerd[1593]: time="2025-09-04T05:21:52.991293549Z" level=info msg="TaskExit event in podsandbox handler container_id:\"708c8c4dd3aada32182bc81b1740209674583a32db9a9950751a6cdc91c27eb0\" id:\"708c8c4dd3aada32182bc81b1740209674583a32db9a9950751a6cdc91c27eb0\" pid:3366 exited_at:{seconds:1756963312 nanos:990879911}" Sep 4 05:21:52.990986 systemd[1]: cri-containerd-708c8c4dd3aada32182bc81b1740209674583a32db9a9950751a6cdc91c27eb0.scope: Consumed 6.702s CPU time, 125.4M memory peak, 228K read from disk, 13.3M written to disk. 
Sep 4 05:21:52.994809 containerd[1593]: time="2025-09-04T05:21:52.994770521Z" level=info msg="StopContainer for \"b1f0d81d812b90eb0299082104413906edac1f64b100f0f61da24f3977282644\" returns successfully" Sep 4 05:21:52.999627 containerd[1593]: time="2025-09-04T05:21:52.999596414Z" level=info msg="StopPodSandbox for \"3a7630ff8d956f367acfb73b09d38c86d63c032ec0aecd40d46bcf96160f7af7\"" Sep 4 05:21:52.999726 containerd[1593]: time="2025-09-04T05:21:52.999665966Z" level=info msg="Container to stop \"b1f0d81d812b90eb0299082104413906edac1f64b100f0f61da24f3977282644\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 4 05:21:53.007593 systemd[1]: cri-containerd-3a7630ff8d956f367acfb73b09d38c86d63c032ec0aecd40d46bcf96160f7af7.scope: Deactivated successfully. Sep 4 05:21:53.010091 containerd[1593]: time="2025-09-04T05:21:53.010043879Z" level=info msg="TaskExit event in podsandbox handler container_id:\"3a7630ff8d956f367acfb73b09d38c86d63c032ec0aecd40d46bcf96160f7af7\" id:\"3a7630ff8d956f367acfb73b09d38c86d63c032ec0aecd40d46bcf96160f7af7\" pid:2942 exit_status:137 exited_at:{seconds:1756963313 nanos:8684570}" Sep 4 05:21:53.016083 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-708c8c4dd3aada32182bc81b1740209674583a32db9a9950751a6cdc91c27eb0-rootfs.mount: Deactivated successfully. Sep 4 05:21:53.039872 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-3a7630ff8d956f367acfb73b09d38c86d63c032ec0aecd40d46bcf96160f7af7-rootfs.mount: Deactivated successfully. Sep 4 05:21:53.195925 containerd[1593]: time="2025-09-04T05:21:53.195803377Z" level=info msg="shim disconnected" id=3a7630ff8d956f367acfb73b09d38c86d63c032ec0aecd40d46bcf96160f7af7 namespace=k8s.io Sep 4 05:21:53.197478 containerd[1593]: time="2025-09-04T05:21:53.196185914Z" level=warning msg="cleaning up after shim disconnected" id=3a7630ff8d956f367acfb73b09d38c86d63c032ec0aecd40d46bcf96160f7af7 namespace=k8s.io Sep 4 05:21:53.229286 containerd[1593]: time="2025-09-04T05:21:53.196209930Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 4 05:21:53.229483 containerd[1593]: time="2025-09-04T05:21:53.218885948Z" level=info msg="StopContainer for \"708c8c4dd3aada32182bc81b1740209674583a32db9a9950751a6cdc91c27eb0\" returns successfully" Sep 4 05:21:53.230037 containerd[1593]: time="2025-09-04T05:21:53.229996915Z" level=info msg="StopPodSandbox for \"0eb708b84c252c9d27fca3eb53bf2371fc42e66204adc4adcd980e718aa12a13\"" Sep 4 05:21:53.230098 containerd[1593]: time="2025-09-04T05:21:53.230074282Z" level=info msg="Container to stop \"ff1d6c546e52f959f9a618bfe72acb4b4e9434e1a1e3704c85823588cb11a55f\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 4 05:21:53.230098 containerd[1593]: time="2025-09-04T05:21:53.230086054Z" level=info msg="Container to stop \"1c1c9cb31895b6049fc714b0459a671757b2b5046bcbb319fbe7f9bd3acda998\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 4 05:21:53.230168 containerd[1593]: time="2025-09-04T05:21:53.230094580Z" level=info msg="Container to stop \"d99a8db96111c81bd69a2c424e62c4715b904da8acf676884d010357c1c2d8c5\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 4 05:21:53.230168 containerd[1593]: time="2025-09-04T05:21:53.230111904Z" level=info msg="Container to stop \"90b144b6c90b81a45cd839f70cff55eda0ff497854fa60026738b091c2ed7f88\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 4 05:21:53.230168 containerd[1593]: 
time="2025-09-04T05:21:53.230120750Z" level=info msg="Container to stop \"708c8c4dd3aada32182bc81b1740209674583a32db9a9950751a6cdc91c27eb0\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 4 05:21:53.237298 systemd[1]: cri-containerd-0eb708b84c252c9d27fca3eb53bf2371fc42e66204adc4adcd980e718aa12a13.scope: Deactivated successfully. Sep 4 05:21:53.257023 containerd[1593]: time="2025-09-04T05:21:53.256971898Z" level=info msg="TaskExit event in podsandbox handler container_id:\"0eb708b84c252c9d27fca3eb53bf2371fc42e66204adc4adcd980e718aa12a13\" id:\"0eb708b84c252c9d27fca3eb53bf2371fc42e66204adc4adcd980e718aa12a13\" pid:2934 exit_status:137 exited_at:{seconds:1756963313 nanos:243686271}" Sep 4 05:21:53.258223 containerd[1593]: time="2025-09-04T05:21:53.257726525Z" level=info msg="received exit event sandbox_id:\"3a7630ff8d956f367acfb73b09d38c86d63c032ec0aecd40d46bcf96160f7af7\" exit_status:137 exited_at:{seconds:1756963313 nanos:8684570}" Sep 4 05:21:53.259461 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-3a7630ff8d956f367acfb73b09d38c86d63c032ec0aecd40d46bcf96160f7af7-shm.mount: Deactivated successfully. Sep 4 05:21:53.267112 containerd[1593]: time="2025-09-04T05:21:53.267051591Z" level=info msg="TearDown network for sandbox \"3a7630ff8d956f367acfb73b09d38c86d63c032ec0aecd40d46bcf96160f7af7\" successfully" Sep 4 05:21:53.267112 containerd[1593]: time="2025-09-04T05:21:53.267092068Z" level=info msg="StopPodSandbox for \"3a7630ff8d956f367acfb73b09d38c86d63c032ec0aecd40d46bcf96160f7af7\" returns successfully" Sep 4 05:21:53.274725 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-0eb708b84c252c9d27fca3eb53bf2371fc42e66204adc4adcd980e718aa12a13-rootfs.mount: Deactivated successfully. Sep 4 05:21:53.278114 containerd[1593]: time="2025-09-04T05:21:53.278058099Z" level=info msg="received exit event sandbox_id:\"0eb708b84c252c9d27fca3eb53bf2371fc42e66204adc4adcd980e718aa12a13\" exit_status:137 exited_at:{seconds:1756963313 nanos:243686271}" Sep 4 05:21:53.278316 containerd[1593]: time="2025-09-04T05:21:53.278268940Z" level=info msg="shim disconnected" id=0eb708b84c252c9d27fca3eb53bf2371fc42e66204adc4adcd980e718aa12a13 namespace=k8s.io Sep 4 05:21:53.278316 containerd[1593]: time="2025-09-04T05:21:53.278295371Z" level=warning msg="cleaning up after shim disconnected" id=0eb708b84c252c9d27fca3eb53bf2371fc42e66204adc4adcd980e718aa12a13 namespace=k8s.io Sep 4 05:21:53.278430 containerd[1593]: time="2025-09-04T05:21:53.278302714Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 4 05:21:53.279923 containerd[1593]: time="2025-09-04T05:21:53.279893504Z" level=info msg="TearDown network for sandbox \"0eb708b84c252c9d27fca3eb53bf2371fc42e66204adc4adcd980e718aa12a13\" successfully" Sep 4 05:21:53.279923 containerd[1593]: time="2025-09-04T05:21:53.279915616Z" level=info msg="StopPodSandbox for \"0eb708b84c252c9d27fca3eb53bf2371fc42e66204adc4adcd980e718aa12a13\" returns successfully" Sep 4 05:21:53.406318 kubelet[2736]: I0904 05:21:53.406249 2736 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b8rdc\" (UniqueName: \"kubernetes.io/projected/82744d0c-dd68-43e3-9f7d-fa17285199ee-kube-api-access-b8rdc\") pod \"82744d0c-dd68-43e3-9f7d-fa17285199ee\" (UID: \"82744d0c-dd68-43e3-9f7d-fa17285199ee\") " Sep 4 05:21:53.406318 kubelet[2736]: I0904 05:21:53.406295 2736 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: 
\"kubernetes.io/host-path/82744d0c-dd68-43e3-9f7d-fa17285199ee-cni-path\") pod \"82744d0c-dd68-43e3-9f7d-fa17285199ee\" (UID: \"82744d0c-dd68-43e3-9f7d-fa17285199ee\") " Sep 4 05:21:53.406318 kubelet[2736]: I0904 05:21:53.406313 2736 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/82744d0c-dd68-43e3-9f7d-fa17285199ee-xtables-lock\") pod \"82744d0c-dd68-43e3-9f7d-fa17285199ee\" (UID: \"82744d0c-dd68-43e3-9f7d-fa17285199ee\") " Sep 4 05:21:53.406318 kubelet[2736]: I0904 05:21:53.406327 2736 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/82744d0c-dd68-43e3-9f7d-fa17285199ee-hostproc\") pod \"82744d0c-dd68-43e3-9f7d-fa17285199ee\" (UID: \"82744d0c-dd68-43e3-9f7d-fa17285199ee\") " Sep 4 05:21:53.406318 kubelet[2736]: I0904 05:21:53.406340 2736 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/82744d0c-dd68-43e3-9f7d-fa17285199ee-bpf-maps\") pod \"82744d0c-dd68-43e3-9f7d-fa17285199ee\" (UID: \"82744d0c-dd68-43e3-9f7d-fa17285199ee\") " Sep 4 05:21:53.407024 kubelet[2736]: I0904 05:21:53.406357 2736 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/e73116b9-cf2a-4a5a-8640-1a0f1b531df6-cilium-config-path\") pod \"e73116b9-cf2a-4a5a-8640-1a0f1b531df6\" (UID: \"e73116b9-cf2a-4a5a-8640-1a0f1b531df6\") " Sep 4 05:21:53.407024 kubelet[2736]: I0904 05:21:53.406371 2736 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/82744d0c-dd68-43e3-9f7d-fa17285199ee-cilium-cgroup\") pod \"82744d0c-dd68-43e3-9f7d-fa17285199ee\" (UID: \"82744d0c-dd68-43e3-9f7d-fa17285199ee\") " Sep 4 05:21:53.407024 kubelet[2736]: I0904 05:21:53.406400 2736 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/82744d0c-dd68-43e3-9f7d-fa17285199ee-host-proc-sys-kernel\") pod \"82744d0c-dd68-43e3-9f7d-fa17285199ee\" (UID: \"82744d0c-dd68-43e3-9f7d-fa17285199ee\") " Sep 4 05:21:53.407024 kubelet[2736]: I0904 05:21:53.406417 2736 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/82744d0c-dd68-43e3-9f7d-fa17285199ee-hubble-tls\") pod \"82744d0c-dd68-43e3-9f7d-fa17285199ee\" (UID: \"82744d0c-dd68-43e3-9f7d-fa17285199ee\") " Sep 4 05:21:53.407024 kubelet[2736]: I0904 05:21:53.406436 2736 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/82744d0c-dd68-43e3-9f7d-fa17285199ee-clustermesh-secrets\") pod \"82744d0c-dd68-43e3-9f7d-fa17285199ee\" (UID: \"82744d0c-dd68-43e3-9f7d-fa17285199ee\") " Sep 4 05:21:53.407024 kubelet[2736]: I0904 05:21:53.406450 2736 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/82744d0c-dd68-43e3-9f7d-fa17285199ee-host-proc-sys-net\") pod \"82744d0c-dd68-43e3-9f7d-fa17285199ee\" (UID: \"82744d0c-dd68-43e3-9f7d-fa17285199ee\") " Sep 4 05:21:53.407222 kubelet[2736]: I0904 05:21:53.406468 2736 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: 
\"kubernetes.io/configmap/82744d0c-dd68-43e3-9f7d-fa17285199ee-cilium-config-path\") pod \"82744d0c-dd68-43e3-9f7d-fa17285199ee\" (UID: \"82744d0c-dd68-43e3-9f7d-fa17285199ee\") " Sep 4 05:21:53.407222 kubelet[2736]: I0904 05:21:53.406483 2736 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/82744d0c-dd68-43e3-9f7d-fa17285199ee-cilium-run\") pod \"82744d0c-dd68-43e3-9f7d-fa17285199ee\" (UID: \"82744d0c-dd68-43e3-9f7d-fa17285199ee\") " Sep 4 05:21:53.407222 kubelet[2736]: I0904 05:21:53.406498 2736 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/82744d0c-dd68-43e3-9f7d-fa17285199ee-etc-cni-netd\") pod \"82744d0c-dd68-43e3-9f7d-fa17285199ee\" (UID: \"82744d0c-dd68-43e3-9f7d-fa17285199ee\") " Sep 4 05:21:53.407222 kubelet[2736]: I0904 05:21:53.406514 2736 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/82744d0c-dd68-43e3-9f7d-fa17285199ee-lib-modules\") pod \"82744d0c-dd68-43e3-9f7d-fa17285199ee\" (UID: \"82744d0c-dd68-43e3-9f7d-fa17285199ee\") " Sep 4 05:21:53.407222 kubelet[2736]: I0904 05:21:53.406531 2736 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sp9zq\" (UniqueName: \"kubernetes.io/projected/e73116b9-cf2a-4a5a-8640-1a0f1b531df6-kube-api-access-sp9zq\") pod \"e73116b9-cf2a-4a5a-8640-1a0f1b531df6\" (UID: \"e73116b9-cf2a-4a5a-8640-1a0f1b531df6\") " Sep 4 05:21:53.412045 kubelet[2736]: I0904 05:21:53.406485 2736 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/82744d0c-dd68-43e3-9f7d-fa17285199ee-hostproc" (OuterVolumeSpecName: "hostproc") pod "82744d0c-dd68-43e3-9f7d-fa17285199ee" (UID: "82744d0c-dd68-43e3-9f7d-fa17285199ee"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 4 05:21:53.412045 kubelet[2736]: I0904 05:21:53.406529 2736 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/82744d0c-dd68-43e3-9f7d-fa17285199ee-cni-path" (OuterVolumeSpecName: "cni-path") pod "82744d0c-dd68-43e3-9f7d-fa17285199ee" (UID: "82744d0c-dd68-43e3-9f7d-fa17285199ee"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 4 05:21:53.412045 kubelet[2736]: I0904 05:21:53.406542 2736 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/82744d0c-dd68-43e3-9f7d-fa17285199ee-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "82744d0c-dd68-43e3-9f7d-fa17285199ee" (UID: "82744d0c-dd68-43e3-9f7d-fa17285199ee"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 4 05:21:53.412045 kubelet[2736]: I0904 05:21:53.406765 2736 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/82744d0c-dd68-43e3-9f7d-fa17285199ee-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "82744d0c-dd68-43e3-9f7d-fa17285199ee" (UID: "82744d0c-dd68-43e3-9f7d-fa17285199ee"). InnerVolumeSpecName "cilium-run". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 4 05:21:53.412045 kubelet[2736]: I0904 05:21:53.406785 2736 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/82744d0c-dd68-43e3-9f7d-fa17285199ee-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "82744d0c-dd68-43e3-9f7d-fa17285199ee" (UID: "82744d0c-dd68-43e3-9f7d-fa17285199ee"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 4 05:21:53.412334 kubelet[2736]: I0904 05:21:53.411356 2736 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e73116b9-cf2a-4a5a-8640-1a0f1b531df6-kube-api-access-sp9zq" (OuterVolumeSpecName: "kube-api-access-sp9zq") pod "e73116b9-cf2a-4a5a-8640-1a0f1b531df6" (UID: "e73116b9-cf2a-4a5a-8640-1a0f1b531df6"). InnerVolumeSpecName "kube-api-access-sp9zq". PluginName "kubernetes.io/projected", VolumeGIDValue "" Sep 4 05:21:53.412334 kubelet[2736]: I0904 05:21:53.411402 2736 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/82744d0c-dd68-43e3-9f7d-fa17285199ee-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "82744d0c-dd68-43e3-9f7d-fa17285199ee" (UID: "82744d0c-dd68-43e3-9f7d-fa17285199ee"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 4 05:21:53.412334 kubelet[2736]: I0904 05:21:53.411414 2736 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/82744d0c-dd68-43e3-9f7d-fa17285199ee-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "82744d0c-dd68-43e3-9f7d-fa17285199ee" (UID: "82744d0c-dd68-43e3-9f7d-fa17285199ee"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 4 05:21:53.412334 kubelet[2736]: I0904 05:21:53.411503 2736 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/82744d0c-dd68-43e3-9f7d-fa17285199ee-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "82744d0c-dd68-43e3-9f7d-fa17285199ee" (UID: "82744d0c-dd68-43e3-9f7d-fa17285199ee"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 4 05:21:53.412334 kubelet[2736]: I0904 05:21:53.411520 2736 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/82744d0c-dd68-43e3-9f7d-fa17285199ee-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "82744d0c-dd68-43e3-9f7d-fa17285199ee" (UID: "82744d0c-dd68-43e3-9f7d-fa17285199ee"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 4 05:21:53.412582 kubelet[2736]: I0904 05:21:53.411704 2736 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/82744d0c-dd68-43e3-9f7d-fa17285199ee-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "82744d0c-dd68-43e3-9f7d-fa17285199ee" (UID: "82744d0c-dd68-43e3-9f7d-fa17285199ee"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Sep 4 05:21:53.412582 kubelet[2736]: I0904 05:21:53.411754 2736 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/82744d0c-dd68-43e3-9f7d-fa17285199ee-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "82744d0c-dd68-43e3-9f7d-fa17285199ee" (UID: "82744d0c-dd68-43e3-9f7d-fa17285199ee"). InnerVolumeSpecName "host-proc-sys-kernel". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 4 05:21:53.412582 kubelet[2736]: I0904 05:21:53.411927 2736 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/82744d0c-dd68-43e3-9f7d-fa17285199ee-kube-api-access-b8rdc" (OuterVolumeSpecName: "kube-api-access-b8rdc") pod "82744d0c-dd68-43e3-9f7d-fa17285199ee" (UID: "82744d0c-dd68-43e3-9f7d-fa17285199ee"). InnerVolumeSpecName "kube-api-access-b8rdc". PluginName "kubernetes.io/projected", VolumeGIDValue "" Sep 4 05:21:53.412666 kubelet[2736]: I0904 05:21:53.412629 2736 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/82744d0c-dd68-43e3-9f7d-fa17285199ee-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "82744d0c-dd68-43e3-9f7d-fa17285199ee" (UID: "82744d0c-dd68-43e3-9f7d-fa17285199ee"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue "" Sep 4 05:21:53.414608 kubelet[2736]: I0904 05:21:53.414559 2736 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e73116b9-cf2a-4a5a-8640-1a0f1b531df6-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "e73116b9-cf2a-4a5a-8640-1a0f1b531df6" (UID: "e73116b9-cf2a-4a5a-8640-1a0f1b531df6"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Sep 4 05:21:53.414984 kubelet[2736]: I0904 05:21:53.414951 2736 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/82744d0c-dd68-43e3-9f7d-fa17285199ee-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "82744d0c-dd68-43e3-9f7d-fa17285199ee" (UID: "82744d0c-dd68-43e3-9f7d-fa17285199ee"). InnerVolumeSpecName "hubble-tls". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Sep 4 05:21:53.506903 kubelet[2736]: I0904 05:21:53.506764 2736 reconciler_common.go:299] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/82744d0c-dd68-43e3-9f7d-fa17285199ee-cilium-cgroup\") on node \"localhost\" DevicePath \"\"" Sep 4 05:21:53.506903 kubelet[2736]: I0904 05:21:53.506793 2736 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/82744d0c-dd68-43e3-9f7d-fa17285199ee-host-proc-sys-kernel\") on node \"localhost\" DevicePath \"\"" Sep 4 05:21:53.506903 kubelet[2736]: I0904 05:21:53.506804 2736 reconciler_common.go:299] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/82744d0c-dd68-43e3-9f7d-fa17285199ee-hubble-tls\") on node \"localhost\" DevicePath \"\"" Sep 4 05:21:53.506903 kubelet[2736]: I0904 05:21:53.506812 2736 reconciler_common.go:299] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/82744d0c-dd68-43e3-9f7d-fa17285199ee-clustermesh-secrets\") on node \"localhost\" DevicePath \"\"" Sep 4 05:21:53.506903 kubelet[2736]: I0904 05:21:53.506821 2736 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/82744d0c-dd68-43e3-9f7d-fa17285199ee-host-proc-sys-net\") on node \"localhost\" DevicePath \"\"" Sep 4 05:21:53.506903 kubelet[2736]: I0904 05:21:53.506829 2736 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/82744d0c-dd68-43e3-9f7d-fa17285199ee-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Sep 4 05:21:53.506903 kubelet[2736]: I0904 05:21:53.506839 2736 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-sp9zq\" (UniqueName: \"kubernetes.io/projected/e73116b9-cf2a-4a5a-8640-1a0f1b531df6-kube-api-access-sp9zq\") on node \"localhost\" DevicePath \"\"" Sep 4 05:21:53.506903 kubelet[2736]: I0904 05:21:53.506848 2736 reconciler_common.go:299] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/82744d0c-dd68-43e3-9f7d-fa17285199ee-cilium-run\") on node \"localhost\" DevicePath \"\"" Sep 4 05:21:53.507248 kubelet[2736]: I0904 05:21:53.506857 2736 reconciler_common.go:299] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/82744d0c-dd68-43e3-9f7d-fa17285199ee-etc-cni-netd\") on node \"localhost\" DevicePath \"\"" Sep 4 05:21:53.507248 kubelet[2736]: I0904 05:21:53.506865 2736 reconciler_common.go:299] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/82744d0c-dd68-43e3-9f7d-fa17285199ee-lib-modules\") on node \"localhost\" DevicePath \"\"" Sep 4 05:21:53.507248 kubelet[2736]: I0904 05:21:53.506873 2736 reconciler_common.go:299] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/82744d0c-dd68-43e3-9f7d-fa17285199ee-cni-path\") on node \"localhost\" DevicePath \"\"" Sep 4 05:21:53.507248 kubelet[2736]: I0904 05:21:53.506880 2736 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-b8rdc\" (UniqueName: \"kubernetes.io/projected/82744d0c-dd68-43e3-9f7d-fa17285199ee-kube-api-access-b8rdc\") on node \"localhost\" DevicePath \"\"" Sep 4 05:21:53.507248 kubelet[2736]: I0904 05:21:53.506888 2736 reconciler_common.go:299] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/82744d0c-dd68-43e3-9f7d-fa17285199ee-xtables-lock\") on node 
\"localhost\" DevicePath \"\"" Sep 4 05:21:53.507248 kubelet[2736]: I0904 05:21:53.506897 2736 reconciler_common.go:299] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/82744d0c-dd68-43e3-9f7d-fa17285199ee-hostproc\") on node \"localhost\" DevicePath \"\"" Sep 4 05:21:53.507248 kubelet[2736]: I0904 05:21:53.506905 2736 reconciler_common.go:299] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/82744d0c-dd68-43e3-9f7d-fa17285199ee-bpf-maps\") on node \"localhost\" DevicePath \"\"" Sep 4 05:21:53.507248 kubelet[2736]: I0904 05:21:53.506913 2736 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/e73116b9-cf2a-4a5a-8640-1a0f1b531df6-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Sep 4 05:21:53.915254 kubelet[2736]: I0904 05:21:53.915211 2736 scope.go:117] "RemoveContainer" containerID="b1f0d81d812b90eb0299082104413906edac1f64b100f0f61da24f3977282644" Sep 4 05:21:53.918141 containerd[1593]: time="2025-09-04T05:21:53.917621383Z" level=info msg="RemoveContainer for \"b1f0d81d812b90eb0299082104413906edac1f64b100f0f61da24f3977282644\"" Sep 4 05:21:53.924118 systemd[1]: Removed slice kubepods-besteffort-pode73116b9_cf2a_4a5a_8640_1a0f1b531df6.slice - libcontainer container kubepods-besteffort-pode73116b9_cf2a_4a5a_8640_1a0f1b531df6.slice. Sep 4 05:21:53.928025 containerd[1593]: time="2025-09-04T05:21:53.927902199Z" level=info msg="RemoveContainer for \"b1f0d81d812b90eb0299082104413906edac1f64b100f0f61da24f3977282644\" returns successfully" Sep 4 05:21:53.928386 kubelet[2736]: I0904 05:21:53.928338 2736 scope.go:117] "RemoveContainer" containerID="b1f0d81d812b90eb0299082104413906edac1f64b100f0f61da24f3977282644" Sep 4 05:21:53.932447 systemd[1]: Removed slice kubepods-burstable-pod82744d0c_dd68_43e3_9f7d_fa17285199ee.slice - libcontainer container kubepods-burstable-pod82744d0c_dd68_43e3_9f7d_fa17285199ee.slice. Sep 4 05:21:53.932796 systemd[1]: kubepods-burstable-pod82744d0c_dd68_43e3_9f7d_fa17285199ee.slice: Consumed 6.823s CPU time, 125.8M memory peak, 236K read from disk, 13.3M written to disk. 
Sep 4 05:21:53.940637 containerd[1593]: time="2025-09-04T05:21:53.929022923Z" level=error msg="ContainerStatus for \"b1f0d81d812b90eb0299082104413906edac1f64b100f0f61da24f3977282644\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"b1f0d81d812b90eb0299082104413906edac1f64b100f0f61da24f3977282644\": not found" Sep 4 05:21:53.941754 kubelet[2736]: E0904 05:21:53.941708 2736 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"b1f0d81d812b90eb0299082104413906edac1f64b100f0f61da24f3977282644\": not found" containerID="b1f0d81d812b90eb0299082104413906edac1f64b100f0f61da24f3977282644" Sep 4 05:21:53.941922 kubelet[2736]: I0904 05:21:53.941753 2736 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"b1f0d81d812b90eb0299082104413906edac1f64b100f0f61da24f3977282644"} err="failed to get container status \"b1f0d81d812b90eb0299082104413906edac1f64b100f0f61da24f3977282644\": rpc error: code = NotFound desc = an error occurred when try to find container \"b1f0d81d812b90eb0299082104413906edac1f64b100f0f61da24f3977282644\": not found" Sep 4 05:21:53.941922 kubelet[2736]: I0904 05:21:53.941842 2736 scope.go:117] "RemoveContainer" containerID="708c8c4dd3aada32182bc81b1740209674583a32db9a9950751a6cdc91c27eb0" Sep 4 05:21:53.944124 containerd[1593]: time="2025-09-04T05:21:53.944087157Z" level=info msg="RemoveContainer for \"708c8c4dd3aada32182bc81b1740209674583a32db9a9950751a6cdc91c27eb0\"" Sep 4 05:21:53.949794 containerd[1593]: time="2025-09-04T05:21:53.949751052Z" level=info msg="RemoveContainer for \"708c8c4dd3aada32182bc81b1740209674583a32db9a9950751a6cdc91c27eb0\" returns successfully" Sep 4 05:21:53.950035 kubelet[2736]: I0904 05:21:53.949984 2736 scope.go:117] "RemoveContainer" containerID="d99a8db96111c81bd69a2c424e62c4715b904da8acf676884d010357c1c2d8c5" Sep 4 05:21:53.952174 containerd[1593]: time="2025-09-04T05:21:53.952119622Z" level=info msg="RemoveContainer for \"d99a8db96111c81bd69a2c424e62c4715b904da8acf676884d010357c1c2d8c5\"" Sep 4 05:21:53.956384 containerd[1593]: time="2025-09-04T05:21:53.956349877Z" level=info msg="RemoveContainer for \"d99a8db96111c81bd69a2c424e62c4715b904da8acf676884d010357c1c2d8c5\" returns successfully" Sep 4 05:21:53.956519 kubelet[2736]: I0904 05:21:53.956491 2736 scope.go:117] "RemoveContainer" containerID="90b144b6c90b81a45cd839f70cff55eda0ff497854fa60026738b091c2ed7f88" Sep 4 05:21:53.958366 containerd[1593]: time="2025-09-04T05:21:53.958337641Z" level=info msg="RemoveContainer for \"90b144b6c90b81a45cd839f70cff55eda0ff497854fa60026738b091c2ed7f88\"" Sep 4 05:21:53.962454 containerd[1593]: time="2025-09-04T05:21:53.962411038Z" level=info msg="RemoveContainer for \"90b144b6c90b81a45cd839f70cff55eda0ff497854fa60026738b091c2ed7f88\" returns successfully" Sep 4 05:21:53.962563 kubelet[2736]: I0904 05:21:53.962540 2736 scope.go:117] "RemoveContainer" containerID="1c1c9cb31895b6049fc714b0459a671757b2b5046bcbb319fbe7f9bd3acda998" Sep 4 05:21:53.963711 containerd[1593]: time="2025-09-04T05:21:53.963678661Z" level=info msg="RemoveContainer for \"1c1c9cb31895b6049fc714b0459a671757b2b5046bcbb319fbe7f9bd3acda998\"" Sep 4 05:21:53.967072 containerd[1593]: time="2025-09-04T05:21:53.967046584Z" level=info msg="RemoveContainer for \"1c1c9cb31895b6049fc714b0459a671757b2b5046bcbb319fbe7f9bd3acda998\" returns successfully" Sep 4 05:21:53.967197 kubelet[2736]: I0904 05:21:53.967175 2736 
scope.go:117] "RemoveContainer" containerID="ff1d6c546e52f959f9a618bfe72acb4b4e9434e1a1e3704c85823588cb11a55f" Sep 4 05:21:53.968451 containerd[1593]: time="2025-09-04T05:21:53.968415842Z" level=info msg="RemoveContainer for \"ff1d6c546e52f959f9a618bfe72acb4b4e9434e1a1e3704c85823588cb11a55f\"" Sep 4 05:21:53.972542 containerd[1593]: time="2025-09-04T05:21:53.972507874Z" level=info msg="RemoveContainer for \"ff1d6c546e52f959f9a618bfe72acb4b4e9434e1a1e3704c85823588cb11a55f\" returns successfully" Sep 4 05:21:53.972709 kubelet[2736]: I0904 05:21:53.972681 2736 scope.go:117] "RemoveContainer" containerID="708c8c4dd3aada32182bc81b1740209674583a32db9a9950751a6cdc91c27eb0" Sep 4 05:21:53.972910 containerd[1593]: time="2025-09-04T05:21:53.972879962Z" level=error msg="ContainerStatus for \"708c8c4dd3aada32182bc81b1740209674583a32db9a9950751a6cdc91c27eb0\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"708c8c4dd3aada32182bc81b1740209674583a32db9a9950751a6cdc91c27eb0\": not found" Sep 4 05:21:53.973042 kubelet[2736]: E0904 05:21:53.973010 2736 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"708c8c4dd3aada32182bc81b1740209674583a32db9a9950751a6cdc91c27eb0\": not found" containerID="708c8c4dd3aada32182bc81b1740209674583a32db9a9950751a6cdc91c27eb0" Sep 4 05:21:53.973083 kubelet[2736]: I0904 05:21:53.973050 2736 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"708c8c4dd3aada32182bc81b1740209674583a32db9a9950751a6cdc91c27eb0"} err="failed to get container status \"708c8c4dd3aada32182bc81b1740209674583a32db9a9950751a6cdc91c27eb0\": rpc error: code = NotFound desc = an error occurred when try to find container \"708c8c4dd3aada32182bc81b1740209674583a32db9a9950751a6cdc91c27eb0\": not found" Sep 4 05:21:53.973112 kubelet[2736]: I0904 05:21:53.973083 2736 scope.go:117] "RemoveContainer" containerID="d99a8db96111c81bd69a2c424e62c4715b904da8acf676884d010357c1c2d8c5" Sep 4 05:21:53.973290 containerd[1593]: time="2025-09-04T05:21:53.973248985Z" level=error msg="ContainerStatus for \"d99a8db96111c81bd69a2c424e62c4715b904da8acf676884d010357c1c2d8c5\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"d99a8db96111c81bd69a2c424e62c4715b904da8acf676884d010357c1c2d8c5\": not found" Sep 4 05:21:53.973420 kubelet[2736]: E0904 05:21:53.973392 2736 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"d99a8db96111c81bd69a2c424e62c4715b904da8acf676884d010357c1c2d8c5\": not found" containerID="d99a8db96111c81bd69a2c424e62c4715b904da8acf676884d010357c1c2d8c5" Sep 4 05:21:53.973551 kubelet[2736]: I0904 05:21:53.973515 2736 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"d99a8db96111c81bd69a2c424e62c4715b904da8acf676884d010357c1c2d8c5"} err="failed to get container status \"d99a8db96111c81bd69a2c424e62c4715b904da8acf676884d010357c1c2d8c5\": rpc error: code = NotFound desc = an error occurred when try to find container \"d99a8db96111c81bd69a2c424e62c4715b904da8acf676884d010357c1c2d8c5\": not found" Sep 4 05:21:53.973551 kubelet[2736]: I0904 05:21:53.973537 2736 scope.go:117] "RemoveContainer" containerID="90b144b6c90b81a45cd839f70cff55eda0ff497854fa60026738b091c2ed7f88" Sep 4 05:21:53.973727 containerd[1593]: time="2025-09-04T05:21:53.973691507Z" 
level=error msg="ContainerStatus for \"90b144b6c90b81a45cd839f70cff55eda0ff497854fa60026738b091c2ed7f88\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"90b144b6c90b81a45cd839f70cff55eda0ff497854fa60026738b091c2ed7f88\": not found" Sep 4 05:21:53.973879 kubelet[2736]: E0904 05:21:53.973850 2736 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"90b144b6c90b81a45cd839f70cff55eda0ff497854fa60026738b091c2ed7f88\": not found" containerID="90b144b6c90b81a45cd839f70cff55eda0ff497854fa60026738b091c2ed7f88" Sep 4 05:21:53.973951 kubelet[2736]: I0904 05:21:53.973880 2736 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"90b144b6c90b81a45cd839f70cff55eda0ff497854fa60026738b091c2ed7f88"} err="failed to get container status \"90b144b6c90b81a45cd839f70cff55eda0ff497854fa60026738b091c2ed7f88\": rpc error: code = NotFound desc = an error occurred when try to find container \"90b144b6c90b81a45cd839f70cff55eda0ff497854fa60026738b091c2ed7f88\": not found" Sep 4 05:21:53.973951 kubelet[2736]: I0904 05:21:53.973906 2736 scope.go:117] "RemoveContainer" containerID="1c1c9cb31895b6049fc714b0459a671757b2b5046bcbb319fbe7f9bd3acda998" Sep 4 05:21:53.974172 containerd[1593]: time="2025-09-04T05:21:53.974044239Z" level=error msg="ContainerStatus for \"1c1c9cb31895b6049fc714b0459a671757b2b5046bcbb319fbe7f9bd3acda998\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"1c1c9cb31895b6049fc714b0459a671757b2b5046bcbb319fbe7f9bd3acda998\": not found" Sep 4 05:21:53.974304 kubelet[2736]: E0904 05:21:53.974276 2736 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"1c1c9cb31895b6049fc714b0459a671757b2b5046bcbb319fbe7f9bd3acda998\": not found" containerID="1c1c9cb31895b6049fc714b0459a671757b2b5046bcbb319fbe7f9bd3acda998" Sep 4 05:21:53.974361 kubelet[2736]: I0904 05:21:53.974300 2736 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"1c1c9cb31895b6049fc714b0459a671757b2b5046bcbb319fbe7f9bd3acda998"} err="failed to get container status \"1c1c9cb31895b6049fc714b0459a671757b2b5046bcbb319fbe7f9bd3acda998\": rpc error: code = NotFound desc = an error occurred when try to find container \"1c1c9cb31895b6049fc714b0459a671757b2b5046bcbb319fbe7f9bd3acda998\": not found" Sep 4 05:21:53.974361 kubelet[2736]: I0904 05:21:53.974317 2736 scope.go:117] "RemoveContainer" containerID="ff1d6c546e52f959f9a618bfe72acb4b4e9434e1a1e3704c85823588cb11a55f" Sep 4 05:21:53.974533 containerd[1593]: time="2025-09-04T05:21:53.974505377Z" level=error msg="ContainerStatus for \"ff1d6c546e52f959f9a618bfe72acb4b4e9434e1a1e3704c85823588cb11a55f\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"ff1d6c546e52f959f9a618bfe72acb4b4e9434e1a1e3704c85823588cb11a55f\": not found" Sep 4 05:21:53.974771 kubelet[2736]: E0904 05:21:53.974750 2736 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"ff1d6c546e52f959f9a618bfe72acb4b4e9434e1a1e3704c85823588cb11a55f\": not found" containerID="ff1d6c546e52f959f9a618bfe72acb4b4e9434e1a1e3704c85823588cb11a55f" Sep 4 05:21:53.974816 kubelet[2736]: I0904 05:21:53.974770 2736 pod_container_deletor.go:53] "DeleteContainer 
returned error" containerID={"Type":"containerd","ID":"ff1d6c546e52f959f9a618bfe72acb4b4e9434e1a1e3704c85823588cb11a55f"} err="failed to get container status \"ff1d6c546e52f959f9a618bfe72acb4b4e9434e1a1e3704c85823588cb11a55f\": rpc error: code = NotFound desc = an error occurred when try to find container \"ff1d6c546e52f959f9a618bfe72acb4b4e9434e1a1e3704c85823588cb11a55f\": not found" Sep 4 05:21:53.977533 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-0eb708b84c252c9d27fca3eb53bf2371fc42e66204adc4adcd980e718aa12a13-shm.mount: Deactivated successfully. Sep 4 05:21:53.977639 systemd[1]: var-lib-kubelet-pods-e73116b9\x2dcf2a\x2d4a5a\x2d8640\x2d1a0f1b531df6-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dsp9zq.mount: Deactivated successfully. Sep 4 05:21:53.977712 systemd[1]: var-lib-kubelet-pods-82744d0c\x2ddd68\x2d43e3\x2d9f7d\x2dfa17285199ee-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2db8rdc.mount: Deactivated successfully. Sep 4 05:21:53.977784 systemd[1]: var-lib-kubelet-pods-82744d0c\x2ddd68\x2d43e3\x2d9f7d\x2dfa17285199ee-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Sep 4 05:21:53.977865 systemd[1]: var-lib-kubelet-pods-82744d0c\x2ddd68\x2d43e3\x2d9f7d\x2dfa17285199ee-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Sep 4 05:21:54.864341 sshd[4367]: Connection closed by 10.0.0.1 port 59286 Sep 4 05:21:54.864898 sshd-session[4364]: pam_unix(sshd:session): session closed for user core Sep 4 05:21:54.873074 systemd[1]: sshd@23-10.0.0.60:22-10.0.0.1:59286.service: Deactivated successfully. Sep 4 05:21:54.874933 systemd[1]: session-24.scope: Deactivated successfully. Sep 4 05:21:54.875696 systemd-logind[1582]: Session 24 logged out. Waiting for processes to exit. Sep 4 05:21:54.878616 systemd[1]: Started sshd@24-10.0.0.60:22-10.0.0.1:59296.service - OpenSSH per-connection server daemon (10.0.0.1:59296). Sep 4 05:21:54.879473 systemd-logind[1582]: Removed session 24. Sep 4 05:21:54.933507 sshd[4516]: Accepted publickey for core from 10.0.0.1 port 59296 ssh2: RSA SHA256:Ny8nYDOBhPv0PH6gzvqXa8DSRfbQSyp+8RjA0Ibmoyo Sep 4 05:21:54.935047 sshd-session[4516]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 05:21:54.939519 systemd-logind[1582]: New session 25 of user core. Sep 4 05:21:54.955510 systemd[1]: Started session-25.scope - Session 25 of User core. Sep 4 05:21:55.147251 kubelet[2736]: I0904 05:21:55.147196 2736 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="82744d0c-dd68-43e3-9f7d-fa17285199ee" path="/var/lib/kubelet/pods/82744d0c-dd68-43e3-9f7d-fa17285199ee/volumes" Sep 4 05:21:55.148079 kubelet[2736]: I0904 05:21:55.148049 2736 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e73116b9-cf2a-4a5a-8640-1a0f1b531df6" path="/var/lib/kubelet/pods/e73116b9-cf2a-4a5a-8640-1a0f1b531df6/volumes" Sep 4 05:21:55.612823 sshd[4519]: Connection closed by 10.0.0.1 port 59296 Sep 4 05:21:55.613153 sshd-session[4516]: pam_unix(sshd:session): session closed for user core Sep 4 05:21:55.625917 systemd[1]: sshd@24-10.0.0.60:22-10.0.0.1:59296.service: Deactivated successfully. Sep 4 05:21:55.630132 systemd[1]: session-25.scope: Deactivated successfully. Sep 4 05:21:55.631601 systemd-logind[1582]: Session 25 logged out. Waiting for processes to exit. Sep 4 05:21:55.637812 systemd[1]: Started sshd@25-10.0.0.60:22-10.0.0.1:59306.service - OpenSSH per-connection server daemon (10.0.0.1:59306). 
Sep 4 05:21:55.642276 systemd-logind[1582]: Removed session 25. Sep 4 05:21:55.643119 kubelet[2736]: I0904 05:21:55.643081 2736 memory_manager.go:355] "RemoveStaleState removing state" podUID="82744d0c-dd68-43e3-9f7d-fa17285199ee" containerName="cilium-agent" Sep 4 05:21:55.643119 kubelet[2736]: I0904 05:21:55.643116 2736 memory_manager.go:355] "RemoveStaleState removing state" podUID="e73116b9-cf2a-4a5a-8640-1a0f1b531df6" containerName="cilium-operator" Sep 4 05:21:55.673201 systemd[1]: Created slice kubepods-burstable-pod6c5e6c02_9178_4b6e_97d4_39ae6f6d8697.slice - libcontainer container kubepods-burstable-pod6c5e6c02_9178_4b6e_97d4_39ae6f6d8697.slice. Sep 4 05:21:55.706607 sshd[4531]: Accepted publickey for core from 10.0.0.1 port 59306 ssh2: RSA SHA256:Ny8nYDOBhPv0PH6gzvqXa8DSRfbQSyp+8RjA0Ibmoyo Sep 4 05:21:55.708064 sshd-session[4531]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 05:21:55.712759 systemd-logind[1582]: New session 26 of user core. Sep 4 05:21:55.721534 systemd[1]: Started session-26.scope - Session 26 of User core. Sep 4 05:21:55.772497 sshd[4535]: Connection closed by 10.0.0.1 port 59306 Sep 4 05:21:55.772795 sshd-session[4531]: pam_unix(sshd:session): session closed for user core Sep 4 05:21:55.786249 systemd[1]: sshd@25-10.0.0.60:22-10.0.0.1:59306.service: Deactivated successfully. Sep 4 05:21:55.788115 systemd[1]: session-26.scope: Deactivated successfully. Sep 4 05:21:55.788881 systemd-logind[1582]: Session 26 logged out. Waiting for processes to exit. Sep 4 05:21:55.791722 systemd[1]: Started sshd@26-10.0.0.60:22-10.0.0.1:59312.service - OpenSSH per-connection server daemon (10.0.0.1:59312). Sep 4 05:21:55.792444 systemd-logind[1582]: Removed session 26. Sep 4 05:21:55.817613 kubelet[2736]: I0904 05:21:55.817554 2736 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/6c5e6c02-9178-4b6e-97d4-39ae6f6d8697-cilium-cgroup\") pod \"cilium-rwwm4\" (UID: \"6c5e6c02-9178-4b6e-97d4-39ae6f6d8697\") " pod="kube-system/cilium-rwwm4" Sep 4 05:21:55.817613 kubelet[2736]: I0904 05:21:55.817596 2736 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/6c5e6c02-9178-4b6e-97d4-39ae6f6d8697-clustermesh-secrets\") pod \"cilium-rwwm4\" (UID: \"6c5e6c02-9178-4b6e-97d4-39ae6f6d8697\") " pod="kube-system/cilium-rwwm4" Sep 4 05:21:55.817721 kubelet[2736]: I0904 05:21:55.817616 2736 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/6c5e6c02-9178-4b6e-97d4-39ae6f6d8697-cilium-run\") pod \"cilium-rwwm4\" (UID: \"6c5e6c02-9178-4b6e-97d4-39ae6f6d8697\") " pod="kube-system/cilium-rwwm4" Sep 4 05:21:55.817721 kubelet[2736]: I0904 05:21:55.817630 2736 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/6c5e6c02-9178-4b6e-97d4-39ae6f6d8697-bpf-maps\") pod \"cilium-rwwm4\" (UID: \"6c5e6c02-9178-4b6e-97d4-39ae6f6d8697\") " pod="kube-system/cilium-rwwm4" Sep 4 05:21:55.817721 kubelet[2736]: I0904 05:21:55.817651 2736 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/6c5e6c02-9178-4b6e-97d4-39ae6f6d8697-host-proc-sys-kernel\") pod \"cilium-rwwm4\" (UID: 
\"6c5e6c02-9178-4b6e-97d4-39ae6f6d8697\") " pod="kube-system/cilium-rwwm4" Sep 4 05:21:55.817721 kubelet[2736]: I0904 05:21:55.817664 2736 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/6c5e6c02-9178-4b6e-97d4-39ae6f6d8697-lib-modules\") pod \"cilium-rwwm4\" (UID: \"6c5e6c02-9178-4b6e-97d4-39ae6f6d8697\") " pod="kube-system/cilium-rwwm4" Sep 4 05:21:55.817721 kubelet[2736]: I0904 05:21:55.817676 2736 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/6c5e6c02-9178-4b6e-97d4-39ae6f6d8697-xtables-lock\") pod \"cilium-rwwm4\" (UID: \"6c5e6c02-9178-4b6e-97d4-39ae6f6d8697\") " pod="kube-system/cilium-rwwm4" Sep 4 05:21:55.817721 kubelet[2736]: I0904 05:21:55.817691 2736 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/6c5e6c02-9178-4b6e-97d4-39ae6f6d8697-host-proc-sys-net\") pod \"cilium-rwwm4\" (UID: \"6c5e6c02-9178-4b6e-97d4-39ae6f6d8697\") " pod="kube-system/cilium-rwwm4" Sep 4 05:21:55.817846 kubelet[2736]: I0904 05:21:55.817705 2736 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/6c5e6c02-9178-4b6e-97d4-39ae6f6d8697-cni-path\") pod \"cilium-rwwm4\" (UID: \"6c5e6c02-9178-4b6e-97d4-39ae6f6d8697\") " pod="kube-system/cilium-rwwm4" Sep 4 05:21:55.817846 kubelet[2736]: I0904 05:21:55.817718 2736 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/6c5e6c02-9178-4b6e-97d4-39ae6f6d8697-cilium-config-path\") pod \"cilium-rwwm4\" (UID: \"6c5e6c02-9178-4b6e-97d4-39ae6f6d8697\") " pod="kube-system/cilium-rwwm4" Sep 4 05:21:55.817846 kubelet[2736]: I0904 05:21:55.817735 2736 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/6c5e6c02-9178-4b6e-97d4-39ae6f6d8697-hubble-tls\") pod \"cilium-rwwm4\" (UID: \"6c5e6c02-9178-4b6e-97d4-39ae6f6d8697\") " pod="kube-system/cilium-rwwm4" Sep 4 05:21:55.817846 kubelet[2736]: I0904 05:21:55.817751 2736 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gxrhd\" (UniqueName: \"kubernetes.io/projected/6c5e6c02-9178-4b6e-97d4-39ae6f6d8697-kube-api-access-gxrhd\") pod \"cilium-rwwm4\" (UID: \"6c5e6c02-9178-4b6e-97d4-39ae6f6d8697\") " pod="kube-system/cilium-rwwm4" Sep 4 05:21:55.817846 kubelet[2736]: I0904 05:21:55.817766 2736 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/6c5e6c02-9178-4b6e-97d4-39ae6f6d8697-etc-cni-netd\") pod \"cilium-rwwm4\" (UID: \"6c5e6c02-9178-4b6e-97d4-39ae6f6d8697\") " pod="kube-system/cilium-rwwm4" Sep 4 05:21:55.817846 kubelet[2736]: I0904 05:21:55.817780 2736 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/6c5e6c02-9178-4b6e-97d4-39ae6f6d8697-hostproc\") pod \"cilium-rwwm4\" (UID: \"6c5e6c02-9178-4b6e-97d4-39ae6f6d8697\") " pod="kube-system/cilium-rwwm4" Sep 4 05:21:55.817983 kubelet[2736]: I0904 05:21:55.817799 2736 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/6c5e6c02-9178-4b6e-97d4-39ae6f6d8697-cilium-ipsec-secrets\") pod \"cilium-rwwm4\" (UID: \"6c5e6c02-9178-4b6e-97d4-39ae6f6d8697\") " pod="kube-system/cilium-rwwm4" Sep 4 05:21:55.855786 sshd[4542]: Accepted publickey for core from 10.0.0.1 port 59312 ssh2: RSA SHA256:Ny8nYDOBhPv0PH6gzvqXa8DSRfbQSyp+8RjA0Ibmoyo Sep 4 05:21:55.857677 sshd-session[4542]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 05:21:55.862782 systemd-logind[1582]: New session 27 of user core. Sep 4 05:21:55.878524 systemd[1]: Started session-27.scope - Session 27 of User core. Sep 4 05:21:55.977921 containerd[1593]: time="2025-09-04T05:21:55.977840440Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-rwwm4,Uid:6c5e6c02-9178-4b6e-97d4-39ae6f6d8697,Namespace:kube-system,Attempt:0,}" Sep 4 05:21:56.000649 containerd[1593]: time="2025-09-04T05:21:56.000537317Z" level=info msg="connecting to shim 62f6ba3a2188b674400bf1522cadf3c1a315357f38c46a360a22919142ed547d" address="unix:///run/containerd/s/b6f9ff0c537131a4b784515adb1013f9222d36ac684e0b2e6a81006675c78fe5" namespace=k8s.io protocol=ttrpc version=3 Sep 4 05:21:56.032718 systemd[1]: Started cri-containerd-62f6ba3a2188b674400bf1522cadf3c1a315357f38c46a360a22919142ed547d.scope - libcontainer container 62f6ba3a2188b674400bf1522cadf3c1a315357f38c46a360a22919142ed547d. Sep 4 05:21:56.065840 containerd[1593]: time="2025-09-04T05:21:56.065780979Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-rwwm4,Uid:6c5e6c02-9178-4b6e-97d4-39ae6f6d8697,Namespace:kube-system,Attempt:0,} returns sandbox id \"62f6ba3a2188b674400bf1522cadf3c1a315357f38c46a360a22919142ed547d\"" Sep 4 05:21:56.069103 containerd[1593]: time="2025-09-04T05:21:56.068522042Z" level=info msg="CreateContainer within sandbox \"62f6ba3a2188b674400bf1522cadf3c1a315357f38c46a360a22919142ed547d\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Sep 4 05:21:56.075855 containerd[1593]: time="2025-09-04T05:21:56.075808663Z" level=info msg="Container 6b89dd53aa1893ae5f5c80af41c0e19323890d68168c54d85bdaf648e3cac534: CDI devices from CRI Config.CDIDevices: []" Sep 4 05:21:56.083314 containerd[1593]: time="2025-09-04T05:21:56.083254005Z" level=info msg="CreateContainer within sandbox \"62f6ba3a2188b674400bf1522cadf3c1a315357f38c46a360a22919142ed547d\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"6b89dd53aa1893ae5f5c80af41c0e19323890d68168c54d85bdaf648e3cac534\"" Sep 4 05:21:56.083732 containerd[1593]: time="2025-09-04T05:21:56.083702478Z" level=info msg="StartContainer for \"6b89dd53aa1893ae5f5c80af41c0e19323890d68168c54d85bdaf648e3cac534\"" Sep 4 05:21:56.084483 containerd[1593]: time="2025-09-04T05:21:56.084456973Z" level=info msg="connecting to shim 6b89dd53aa1893ae5f5c80af41c0e19323890d68168c54d85bdaf648e3cac534" address="unix:///run/containerd/s/b6f9ff0c537131a4b784515adb1013f9222d36ac684e0b2e6a81006675c78fe5" protocol=ttrpc version=3 Sep 4 05:21:56.106531 systemd[1]: Started cri-containerd-6b89dd53aa1893ae5f5c80af41c0e19323890d68168c54d85bdaf648e3cac534.scope - libcontainer container 6b89dd53aa1893ae5f5c80af41c0e19323890d68168c54d85bdaf648e3cac534. 
Sep 4 05:21:56.137351 containerd[1593]: time="2025-09-04T05:21:56.137224633Z" level=info msg="StartContainer for \"6b89dd53aa1893ae5f5c80af41c0e19323890d68168c54d85bdaf648e3cac534\" returns successfully"
Sep 4 05:21:56.145738 systemd[1]: cri-containerd-6b89dd53aa1893ae5f5c80af41c0e19323890d68168c54d85bdaf648e3cac534.scope: Deactivated successfully.
Sep 4 05:21:56.148759 containerd[1593]: time="2025-09-04T05:21:56.148713326Z" level=info msg="received exit event container_id:\"6b89dd53aa1893ae5f5c80af41c0e19323890d68168c54d85bdaf648e3cac534\" id:\"6b89dd53aa1893ae5f5c80af41c0e19323890d68168c54d85bdaf648e3cac534\" pid:4615 exited_at:{seconds:1756963316 nanos:148392256}"
Sep 4 05:21:56.148861 containerd[1593]: time="2025-09-04T05:21:56.148813858Z" level=info msg="TaskExit event in podsandbox handler container_id:\"6b89dd53aa1893ae5f5c80af41c0e19323890d68168c54d85bdaf648e3cac534\" id:\"6b89dd53aa1893ae5f5c80af41c0e19323890d68168c54d85bdaf648e3cac534\" pid:4615 exited_at:{seconds:1756963316 nanos:148392256}"
Sep 4 05:21:56.271020 kubelet[2736]: E0904 05:21:56.270965 2736 kubelet.go:3002] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Sep 4 05:21:56.935502 containerd[1593]: time="2025-09-04T05:21:56.935426102Z" level=info msg="CreateContainer within sandbox \"62f6ba3a2188b674400bf1522cadf3c1a315357f38c46a360a22919142ed547d\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Sep 4 05:21:56.945796 containerd[1593]: time="2025-09-04T05:21:56.944604230Z" level=info msg="Container b8f5e6048a483ec5b4f0ae1958ef066c14147c8e2f8fcd2bad0f862bd2905e49: CDI devices from CRI Config.CDIDevices: []"
Sep 4 05:21:56.956681 containerd[1593]: time="2025-09-04T05:21:56.956557606Z" level=info msg="CreateContainer within sandbox \"62f6ba3a2188b674400bf1522cadf3c1a315357f38c46a360a22919142ed547d\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"b8f5e6048a483ec5b4f0ae1958ef066c14147c8e2f8fcd2bad0f862bd2905e49\""
Sep 4 05:21:56.957171 containerd[1593]: time="2025-09-04T05:21:56.957126368Z" level=info msg="StartContainer for \"b8f5e6048a483ec5b4f0ae1958ef066c14147c8e2f8fcd2bad0f862bd2905e49\""
Sep 4 05:21:56.958123 containerd[1593]: time="2025-09-04T05:21:56.958097295Z" level=info msg="connecting to shim b8f5e6048a483ec5b4f0ae1958ef066c14147c8e2f8fcd2bad0f862bd2905e49" address="unix:///run/containerd/s/b6f9ff0c537131a4b784515adb1013f9222d36ac684e0b2e6a81006675c78fe5" protocol=ttrpc version=3
Sep 4 05:21:56.984742 systemd[1]: Started cri-containerd-b8f5e6048a483ec5b4f0ae1958ef066c14147c8e2f8fcd2bad0f862bd2905e49.scope - libcontainer container b8f5e6048a483ec5b4f0ae1958ef066c14147c8e2f8fcd2bad0f862bd2905e49.
Sep 4 05:21:57.016447 containerd[1593]: time="2025-09-04T05:21:57.016402210Z" level=info msg="StartContainer for \"b8f5e6048a483ec5b4f0ae1958ef066c14147c8e2f8fcd2bad0f862bd2905e49\" returns successfully"
Sep 4 05:21:57.023393 systemd[1]: cri-containerd-b8f5e6048a483ec5b4f0ae1958ef066c14147c8e2f8fcd2bad0f862bd2905e49.scope: Deactivated successfully.
Sep 4 05:21:57.024210 containerd[1593]: time="2025-09-04T05:21:57.024127780Z" level=info msg="received exit event container_id:\"b8f5e6048a483ec5b4f0ae1958ef066c14147c8e2f8fcd2bad0f862bd2905e49\" id:\"b8f5e6048a483ec5b4f0ae1958ef066c14147c8e2f8fcd2bad0f862bd2905e49\" pid:4661 exited_at:{seconds:1756963317 nanos:23897522}"
Sep 4 05:21:57.024971 containerd[1593]: time="2025-09-04T05:21:57.024536767Z" level=info msg="TaskExit event in podsandbox handler container_id:\"b8f5e6048a483ec5b4f0ae1958ef066c14147c8e2f8fcd2bad0f862bd2905e49\" id:\"b8f5e6048a483ec5b4f0ae1958ef066c14147c8e2f8fcd2bad0f862bd2905e49\" pid:4661 exited_at:{seconds:1756963317 nanos:23897522}"
Sep 4 05:21:57.044129 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b8f5e6048a483ec5b4f0ae1958ef066c14147c8e2f8fcd2bad0f862bd2905e49-rootfs.mount: Deactivated successfully.
Sep 4 05:21:57.941519 containerd[1593]: time="2025-09-04T05:21:57.941441253Z" level=info msg="CreateContainer within sandbox \"62f6ba3a2188b674400bf1522cadf3c1a315357f38c46a360a22919142ed547d\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Sep 4 05:21:57.952860 containerd[1593]: time="2025-09-04T05:21:57.952792547Z" level=info msg="Container 7b112b3106d3c3fd12ae564fcde4245626837492e3848333870b4dfc096936f1: CDI devices from CRI Config.CDIDevices: []"
Sep 4 05:21:57.964306 containerd[1593]: time="2025-09-04T05:21:57.964246166Z" level=info msg="CreateContainer within sandbox \"62f6ba3a2188b674400bf1522cadf3c1a315357f38c46a360a22919142ed547d\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"7b112b3106d3c3fd12ae564fcde4245626837492e3848333870b4dfc096936f1\""
Sep 4 05:21:57.965048 containerd[1593]: time="2025-09-04T05:21:57.965005370Z" level=info msg="StartContainer for \"7b112b3106d3c3fd12ae564fcde4245626837492e3848333870b4dfc096936f1\""
Sep 4 05:21:57.966720 containerd[1593]: time="2025-09-04T05:21:57.966683801Z" level=info msg="connecting to shim 7b112b3106d3c3fd12ae564fcde4245626837492e3848333870b4dfc096936f1" address="unix:///run/containerd/s/b6f9ff0c537131a4b784515adb1013f9222d36ac684e0b2e6a81006675c78fe5" protocol=ttrpc version=3
Sep 4 05:21:57.991646 systemd[1]: Started cri-containerd-7b112b3106d3c3fd12ae564fcde4245626837492e3848333870b4dfc096936f1.scope - libcontainer container 7b112b3106d3c3fd12ae564fcde4245626837492e3848333870b4dfc096936f1.
Sep 4 05:21:58.043897 systemd[1]: cri-containerd-7b112b3106d3c3fd12ae564fcde4245626837492e3848333870b4dfc096936f1.scope: Deactivated successfully.
Sep 4 05:21:58.045953 containerd[1593]: time="2025-09-04T05:21:58.045915213Z" level=info msg="TaskExit event in podsandbox handler container_id:\"7b112b3106d3c3fd12ae564fcde4245626837492e3848333870b4dfc096936f1\" id:\"7b112b3106d3c3fd12ae564fcde4245626837492e3848333870b4dfc096936f1\" pid:4706 exited_at:{seconds:1756963318 nanos:45637826}"
Sep 4 05:21:58.057488 containerd[1593]: time="2025-09-04T05:21:58.057366881Z" level=info msg="received exit event container_id:\"7b112b3106d3c3fd12ae564fcde4245626837492e3848333870b4dfc096936f1\" id:\"7b112b3106d3c3fd12ae564fcde4245626837492e3848333870b4dfc096936f1\" pid:4706 exited_at:{seconds:1756963318 nanos:45637826}"
Sep 4 05:21:58.060085 containerd[1593]: time="2025-09-04T05:21:58.060045804Z" level=info msg="StartContainer for \"7b112b3106d3c3fd12ae564fcde4245626837492e3848333870b4dfc096936f1\" returns successfully"
Sep 4 05:21:58.083308 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-7b112b3106d3c3fd12ae564fcde4245626837492e3848333870b4dfc096936f1-rootfs.mount: Deactivated successfully.
Sep 4 05:21:58.944054 containerd[1593]: time="2025-09-04T05:21:58.943996223Z" level=info msg="CreateContainer within sandbox \"62f6ba3a2188b674400bf1522cadf3c1a315357f38c46a360a22919142ed547d\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Sep 4 05:21:58.953405 containerd[1593]: time="2025-09-04T05:21:58.953215558Z" level=info msg="Container da6e874f6b4305e638baa151a5dc2b99867c6fbf7c149c2efb0f9748b2071000: CDI devices from CRI Config.CDIDevices: []"
Sep 4 05:21:58.961074 containerd[1593]: time="2025-09-04T05:21:58.961020675Z" level=info msg="CreateContainer within sandbox \"62f6ba3a2188b674400bf1522cadf3c1a315357f38c46a360a22919142ed547d\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"da6e874f6b4305e638baa151a5dc2b99867c6fbf7c149c2efb0f9748b2071000\""
Sep 4 05:21:58.961483 containerd[1593]: time="2025-09-04T05:21:58.961451313Z" level=info msg="StartContainer for \"da6e874f6b4305e638baa151a5dc2b99867c6fbf7c149c2efb0f9748b2071000\""
Sep 4 05:21:58.962526 containerd[1593]: time="2025-09-04T05:21:58.962496731Z" level=info msg="connecting to shim da6e874f6b4305e638baa151a5dc2b99867c6fbf7c149c2efb0f9748b2071000" address="unix:///run/containerd/s/b6f9ff0c537131a4b784515adb1013f9222d36ac684e0b2e6a81006675c78fe5" protocol=ttrpc version=3
Sep 4 05:21:58.986563 systemd[1]: Started cri-containerd-da6e874f6b4305e638baa151a5dc2b99867c6fbf7c149c2efb0f9748b2071000.scope - libcontainer container da6e874f6b4305e638baa151a5dc2b99867c6fbf7c149c2efb0f9748b2071000.
Sep 4 05:21:59.015054 systemd[1]: cri-containerd-da6e874f6b4305e638baa151a5dc2b99867c6fbf7c149c2efb0f9748b2071000.scope: Deactivated successfully.
Sep 4 05:21:59.015602 containerd[1593]: time="2025-09-04T05:21:59.015565109Z" level=info msg="TaskExit event in podsandbox handler container_id:\"da6e874f6b4305e638baa151a5dc2b99867c6fbf7c149c2efb0f9748b2071000\" id:\"da6e874f6b4305e638baa151a5dc2b99867c6fbf7c149c2efb0f9748b2071000\" pid:4744 exited_at:{seconds:1756963319 nanos:15251053}"
Sep 4 05:21:59.017535 containerd[1593]: time="2025-09-04T05:21:59.017476833Z" level=info msg="received exit event container_id:\"da6e874f6b4305e638baa151a5dc2b99867c6fbf7c149c2efb0f9748b2071000\" id:\"da6e874f6b4305e638baa151a5dc2b99867c6fbf7c149c2efb0f9748b2071000\" pid:4744 exited_at:{seconds:1756963319 nanos:15251053}"
Sep 4 05:21:59.018943 containerd[1593]: time="2025-09-04T05:21:59.018916568Z" level=info msg="StartContainer for \"da6e874f6b4305e638baa151a5dc2b99867c6fbf7c149c2efb0f9748b2071000\" returns successfully"
Sep 4 05:21:59.040042 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-da6e874f6b4305e638baa151a5dc2b99867c6fbf7c149c2efb0f9748b2071000-rootfs.mount: Deactivated successfully.
Sep 4 05:21:59.949467 containerd[1593]: time="2025-09-04T05:21:59.949414676Z" level=info msg="CreateContainer within sandbox \"62f6ba3a2188b674400bf1522cadf3c1a315357f38c46a360a22919142ed547d\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Sep 4 05:22:00.161733 containerd[1593]: time="2025-09-04T05:22:00.161684774Z" level=info msg="Container 952dbb402f2feb403388e79eb48285d86ac202703ce4a55bc47aa61bd89e5549: CDI devices from CRI Config.CDIDevices: []"
Sep 4 05:22:00.336340 containerd[1593]: time="2025-09-04T05:22:00.336184109Z" level=info msg="CreateContainer within sandbox \"62f6ba3a2188b674400bf1522cadf3c1a315357f38c46a360a22919142ed547d\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"952dbb402f2feb403388e79eb48285d86ac202703ce4a55bc47aa61bd89e5549\""
Sep 4 05:22:00.337060 containerd[1593]: time="2025-09-04T05:22:00.337016781Z" level=info msg="StartContainer for \"952dbb402f2feb403388e79eb48285d86ac202703ce4a55bc47aa61bd89e5549\""
Sep 4 05:22:00.338331 containerd[1593]: time="2025-09-04T05:22:00.338292825Z" level=info msg="connecting to shim 952dbb402f2feb403388e79eb48285d86ac202703ce4a55bc47aa61bd89e5549" address="unix:///run/containerd/s/b6f9ff0c537131a4b784515adb1013f9222d36ac684e0b2e6a81006675c78fe5" protocol=ttrpc version=3
Sep 4 05:22:00.368567 systemd[1]: Started cri-containerd-952dbb402f2feb403388e79eb48285d86ac202703ce4a55bc47aa61bd89e5549.scope - libcontainer container 952dbb402f2feb403388e79eb48285d86ac202703ce4a55bc47aa61bd89e5549.
Sep 4 05:22:00.410886 containerd[1593]: time="2025-09-04T05:22:00.410832021Z" level=info msg="StartContainer for \"952dbb402f2feb403388e79eb48285d86ac202703ce4a55bc47aa61bd89e5549\" returns successfully"
Sep 4 05:22:00.494667 containerd[1593]: time="2025-09-04T05:22:00.494592555Z" level=info msg="TaskExit event in podsandbox handler container_id:\"952dbb402f2feb403388e79eb48285d86ac202703ce4a55bc47aa61bd89e5549\" id:\"5e839d8b6a7ea8d56bff0bbaad5755051b573ae0a4d48b2168ed77989717b689\" pid:4812 exited_at:{seconds:1756963320 nanos:486928545}"
Sep 4 05:22:00.950473 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni-avx))
Sep 4 05:22:00.991846 kubelet[2736]: I0904 05:22:00.991571 2736 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-rwwm4" podStartSLOduration=5.991530932 podStartE2EDuration="5.991530932s" podCreationTimestamp="2025-09-04 05:21:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-04 05:22:00.988399583 +0000 UTC m=+89.948721188" watchObservedRunningTime="2025-09-04 05:22:00.991530932 +0000 UTC m=+89.951852547"
Sep 4 05:22:02.777323 containerd[1593]: time="2025-09-04T05:22:02.777253012Z" level=info msg="TaskExit event in podsandbox handler container_id:\"952dbb402f2feb403388e79eb48285d86ac202703ce4a55bc47aa61bd89e5549\" id:\"67ba3ce8d093e49046febd34ba08baa0a85a1f511cc2f0d14b6b0d4e18a8544d\" pid:4941 exit_status:1 exited_at:{seconds:1756963322 nanos:776907216}"
Sep 4 05:22:04.392726 systemd-networkd[1492]: lxc_health: Link UP
Sep 4 05:22:04.393274 systemd-networkd[1492]: lxc_health: Gained carrier
Sep 4 05:22:04.902606 containerd[1593]: time="2025-09-04T05:22:04.902546072Z" level=info msg="TaskExit event in podsandbox handler container_id:\"952dbb402f2feb403388e79eb48285d86ac202703ce4a55bc47aa61bd89e5549\" id:\"0d3de411bb3109c7bc7e2ce5d71ef1a67e5cbeef446d025816a562f645ebdc64\" pid:5343 exited_at:{seconds:1756963324 nanos:902019925}"
Sep 4 05:22:05.816683 systemd-networkd[1492]: lxc_health: Gained IPv6LL
Sep 4 05:22:07.068730 containerd[1593]: time="2025-09-04T05:22:07.068660971Z" level=info msg="TaskExit event in podsandbox handler container_id:\"952dbb402f2feb403388e79eb48285d86ac202703ce4a55bc47aa61bd89e5549\" id:\"02225a565ae15ba6952dfa0d2973194e56290f708971f5755455ecafe457aba7\" pid:5382 exited_at:{seconds:1756963327 nanos:68161354}"
Sep 4 05:22:09.169744 containerd[1593]: time="2025-09-04T05:22:09.169689802Z" level=info msg="TaskExit event in podsandbox handler container_id:\"952dbb402f2feb403388e79eb48285d86ac202703ce4a55bc47aa61bd89e5549\" id:\"41fd84739aa8231c237efeb04e547abe1dd365cef4b06c087aaecf4c9e94f2d6\" pid:5413 exited_at:{seconds:1756963329 nanos:169264707}"
Sep 4 05:22:11.261278 containerd[1593]: time="2025-09-04T05:22:11.261218790Z" level=info msg="TaskExit event in podsandbox handler container_id:\"952dbb402f2feb403388e79eb48285d86ac202703ce4a55bc47aa61bd89e5549\" id:\"847572f4218f81ac27aa6b9c7439dac0b10f74d81d950a37d103c45c7d0b2bdb\" pid:5437 exited_at:{seconds:1756963331 nanos:260784527}"
Sep 4 05:22:11.278737 sshd[4545]: Connection closed by 10.0.0.1 port 59312
Sep 4 05:22:11.279566 sshd-session[4542]: pam_unix(sshd:session): session closed for user core
Sep 4 05:22:11.284901 systemd[1]: sshd@26-10.0.0.60:22-10.0.0.1:59312.service: Deactivated successfully.
Sep 4 05:22:11.287258 systemd[1]: session-27.scope: Deactivated successfully.
Sep 4 05:22:11.288311 systemd-logind[1582]: Session 27 logged out. Waiting for processes to exit.
Sep 4 05:22:11.290109 systemd-logind[1582]: Removed session 27.