Sep 4 04:25:45.920904 kernel: Linux version 6.12.44-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.3.0 p8) 14.3.0, GNU ld (Gentoo 2.44 p4) 2.44.0) #1 SMP PREEMPT_DYNAMIC Thu Sep 4 02:15:54 -00 2025
Sep 4 04:25:45.920948 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=d1884c9a158af3462973a912ddb17d2a643da411fd9cba6f05e0fc855c1b0a44
Sep 4 04:25:45.920976 kernel: BIOS-provided physical RAM map:
Sep 4 04:25:45.920985 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Sep 4 04:25:45.920994 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Sep 4 04:25:45.921003 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Sep 4 04:25:45.921013 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000009cfdbfff] usable
Sep 4 04:25:45.921027 kernel: BIOS-e820: [mem 0x000000009cfdc000-0x000000009cffffff] reserved
Sep 4 04:25:45.921040 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved
Sep 4 04:25:45.921049 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved
Sep 4 04:25:45.921059 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Sep 4 04:25:45.921067 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Sep 4 04:25:45.921076 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
Sep 4 04:25:45.921085 kernel: NX (Execute Disable) protection: active
Sep 4 04:25:45.921101 kernel: APIC: Static calls initialized
Sep 4 04:25:45.921111 kernel: SMBIOS 2.8 present.
Sep 4 04:25:45.921125 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.16.2-debian-1.16.2-1 04/01/2014
Sep 4 04:25:45.921136 kernel: DMI: Memory slots populated: 1/1
Sep 4 04:25:45.921145 kernel: Hypervisor detected: KVM
Sep 4 04:25:45.921155 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Sep 4 04:25:45.921165 kernel: kvm-clock: using sched offset of 4289024029 cycles
Sep 4 04:25:45.921176 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Sep 4 04:25:45.921186 kernel: tsc: Detected 2794.750 MHz processor
Sep 4 04:25:45.921200 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Sep 4 04:25:45.921211 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Sep 4 04:25:45.921221 kernel: last_pfn = 0x9cfdc max_arch_pfn = 0x400000000
Sep 4 04:25:45.921232 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Sep 4 04:25:45.921242 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Sep 4 04:25:45.921252 kernel: Using GB pages for direct mapping
Sep 4 04:25:45.921261 kernel: ACPI: Early table checksum verification disabled
Sep 4 04:25:45.921271 kernel: ACPI: RSDP 0x00000000000F59D0 000014 (v00 BOCHS )
Sep 4 04:25:45.921280 kernel: ACPI: RSDT 0x000000009CFE241A 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 4 04:25:45.921295 kernel: ACPI: FACP 0x000000009CFE21FA 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Sep 4 04:25:45.921305 kernel: ACPI: DSDT 0x000000009CFE0040 0021BA (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 4 04:25:45.921314 kernel: ACPI: FACS 0x000000009CFE0000 000040
Sep 4 04:25:45.921324 kernel: ACPI: APIC 0x000000009CFE22EE 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 4 04:25:45.921334 kernel: ACPI: HPET 0x000000009CFE237E 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 4 04:25:45.921344 kernel: ACPI: MCFG 0x000000009CFE23B6 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 4 04:25:45.921354 kernel: ACPI: WAET 0x000000009CFE23F2 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 4 04:25:45.921364 kernel: ACPI: Reserving FACP table memory at [mem 0x9cfe21fa-0x9cfe22ed]
Sep 4 04:25:45.921381 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cfe0040-0x9cfe21f9]
Sep 4 04:25:45.921391 kernel: ACPI: Reserving FACS table memory at [mem 0x9cfe0000-0x9cfe003f]
Sep 4 04:25:45.921401 kernel: ACPI: Reserving APIC table memory at [mem 0x9cfe22ee-0x9cfe237d]
Sep 4 04:25:45.921411 kernel: ACPI: Reserving HPET table memory at [mem 0x9cfe237e-0x9cfe23b5]
Sep 4 04:25:45.921422 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cfe23b6-0x9cfe23f1]
Sep 4 04:25:45.921432 kernel: ACPI: Reserving WAET table memory at [mem 0x9cfe23f2-0x9cfe2419]
Sep 4 04:25:45.921445 kernel: No NUMA configuration found
Sep 4 04:25:45.921455 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cfdbfff]
Sep 4 04:25:45.921464 kernel: NODE_DATA(0) allocated [mem 0x9cfd4dc0-0x9cfdbfff]
Sep 4 04:25:45.921475 kernel: Zone ranges:
Sep 4 04:25:45.921485 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Sep 4 04:25:45.921495 kernel: DMA32 [mem 0x0000000001000000-0x000000009cfdbfff]
Sep 4 04:25:45.921505 kernel: Normal empty
Sep 4 04:25:45.921515 kernel: Device empty
Sep 4 04:25:45.921525 kernel: Movable zone start for each node
Sep 4 04:25:45.921539 kernel: Early memory node ranges
Sep 4 04:25:45.921549 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
Sep 4 04:25:45.921558 kernel: node 0: [mem 0x0000000000100000-0x000000009cfdbfff]
Sep 4 04:25:45.921568 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cfdbfff]
Sep 4 04:25:45.921578 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Sep 4 04:25:45.921586 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Sep 4 04:25:45.921594 kernel: On node 0, zone DMA32: 12324 pages in unavailable ranges
Sep 4 04:25:45.921601 kernel: ACPI: PM-Timer IO Port: 0x608
Sep 4 04:25:45.921613 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Sep 4 04:25:45.921621 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Sep 4 04:25:45.921631 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Sep 4 04:25:45.921639 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Sep 4 04:25:45.921649 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Sep 4 04:25:45.921656 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Sep 4 04:25:45.921664 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Sep 4 04:25:45.921671 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Sep 4 04:25:45.921678 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Sep 4 04:25:45.921686 kernel: TSC deadline timer available
Sep 4 04:25:45.921693 kernel: CPU topo: Max. logical packages: 1
Sep 4 04:25:45.921704 kernel: CPU topo: Max. logical dies: 1
Sep 4 04:25:45.921714 kernel: CPU topo: Max. dies per package: 1
Sep 4 04:25:45.921725 kernel: CPU topo: Max. threads per core: 1
Sep 4 04:25:45.921734 kernel: CPU topo: Num. cores per package: 4
Sep 4 04:25:45.921744 kernel: CPU topo: Num. threads per package: 4
Sep 4 04:25:45.921755 kernel: CPU topo: Allowing 4 present CPUs plus 0 hotplug CPUs
Sep 4 04:25:45.921765 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Sep 4 04:25:45.921776 kernel: kvm-guest: KVM setup pv remote TLB flush
Sep 4 04:25:45.921786 kernel: kvm-guest: setup PV sched yield
Sep 4 04:25:45.921801 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices
Sep 4 04:25:45.921811 kernel: Booting paravirtualized kernel on KVM
Sep 4 04:25:45.921822 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Sep 4 04:25:45.921832 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1
Sep 4 04:25:45.921843 kernel: percpu: Embedded 60 pages/cpu s207832 r8192 d29736 u524288
Sep 4 04:25:45.921854 kernel: pcpu-alloc: s207832 r8192 d29736 u524288 alloc=1*2097152
Sep 4 04:25:45.921864 kernel: pcpu-alloc: [0] 0 1 2 3
Sep 4 04:25:45.921874 kernel: kvm-guest: PV spinlocks enabled
Sep 4 04:25:45.921884 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Sep 4 04:25:45.921901 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=d1884c9a158af3462973a912ddb17d2a643da411fd9cba6f05e0fc855c1b0a44
Sep 4 04:25:45.921912 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Sep 4 04:25:45.921922 kernel: random: crng init done
Sep 4 04:25:45.921933 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Sep 4 04:25:45.922012 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Sep 4 04:25:45.922026 kernel: Fallback order for Node 0: 0
Sep 4 04:25:45.922037 kernel: Built 1 zonelists, mobility grouping on. Total pages: 642938
Sep 4 04:25:45.922048 kernel: Policy zone: DMA32
Sep 4 04:25:45.922065 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Sep 4 04:25:45.922083 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Sep 4 04:25:45.922094 kernel: ftrace: allocating 40102 entries in 157 pages
Sep 4 04:25:45.922104 kernel: ftrace: allocated 157 pages with 5 groups
Sep 4 04:25:45.922114 kernel: Dynamic Preempt: voluntary
Sep 4 04:25:45.922124 kernel: rcu: Preemptible hierarchical RCU implementation.
Sep 4 04:25:45.922135 kernel: rcu: RCU event tracing is enabled.
Sep 4 04:25:45.922146 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Sep 4 04:25:45.922156 kernel: Trampoline variant of Tasks RCU enabled.
Sep 4 04:25:45.922174 kernel: Rude variant of Tasks RCU enabled.
Sep 4 04:25:45.922185 kernel: Tracing variant of Tasks RCU enabled.
Sep 4 04:25:45.922193 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Sep 4 04:25:45.922200 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Sep 4 04:25:45.922208 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Sep 4 04:25:45.922216 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Sep 4 04:25:45.922223 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Sep 4 04:25:45.922231 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16
Sep 4 04:25:45.922239 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Sep 4 04:25:45.922256 kernel: Console: colour VGA+ 80x25
Sep 4 04:25:45.922264 kernel: printk: legacy console [ttyS0] enabled
Sep 4 04:25:45.922271 kernel: ACPI: Core revision 20240827
Sep 4 04:25:45.922282 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Sep 4 04:25:45.922289 kernel: APIC: Switch to symmetric I/O mode setup
Sep 4 04:25:45.922297 kernel: x2apic enabled
Sep 4 04:25:45.922305 kernel: APIC: Switched APIC routing to: physical x2apic
Sep 4 04:25:45.922315 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask()
Sep 4 04:25:45.922324 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself()
Sep 4 04:25:45.922334 kernel: kvm-guest: setup PV IPIs
Sep 4 04:25:45.922342 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Sep 4 04:25:45.922350 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x2848e100549, max_idle_ns: 440795215505 ns
Sep 4 04:25:45.922383 kernel: Calibrating delay loop (skipped) preset value.. 5589.50 BogoMIPS (lpj=2794750)
Sep 4 04:25:45.922391 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Sep 4 04:25:45.922412 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Sep 4 04:25:45.922420 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Sep 4 04:25:45.922428 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Sep 4 04:25:45.922438 kernel: Spectre V2 : Mitigation: Retpolines
Sep 4 04:25:45.922446 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Sep 4 04:25:45.922454 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls
Sep 4 04:25:45.922462 kernel: active return thunk: retbleed_return_thunk
Sep 4 04:25:45.922469 kernel: RETBleed: Mitigation: untrained return thunk
Sep 4 04:25:45.922477 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Sep 4 04:25:45.922485 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Sep 4 04:25:45.922493 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied!
Sep 4 04:25:45.922504 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options.
Sep 4 04:25:45.922512 kernel: active return thunk: srso_return_thunk
Sep 4 04:25:45.922520 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode
Sep 4 04:25:45.922528 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Sep 4 04:25:45.922536 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Sep 4 04:25:45.922544 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Sep 4 04:25:45.922552 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Sep 4 04:25:45.922560 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format.
Sep 4 04:25:45.922568 kernel: Freeing SMP alternatives memory: 32K
Sep 4 04:25:45.922578 kernel: pid_max: default: 32768 minimum: 301
Sep 4 04:25:45.922586 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima
Sep 4 04:25:45.922593 kernel: landlock: Up and running.
Sep 4 04:25:45.922601 kernel: SELinux: Initializing.
Sep 4 04:25:45.922613 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Sep 4 04:25:45.922621 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Sep 4 04:25:45.922629 kernel: smpboot: CPU0: AMD EPYC 7402P 24-Core Processor (family: 0x17, model: 0x31, stepping: 0x0)
Sep 4 04:25:45.922637 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
Sep 4 04:25:45.922645 kernel: ... version: 0
Sep 4 04:25:45.922655 kernel: ... bit width: 48
Sep 4 04:25:45.922662 kernel: ... generic registers: 6
Sep 4 04:25:45.922670 kernel: ... value mask: 0000ffffffffffff
Sep 4 04:25:45.922678 kernel: ... max period: 00007fffffffffff
Sep 4 04:25:45.922686 kernel: ... fixed-purpose events: 0
Sep 4 04:25:45.922693 kernel: ... event mask: 000000000000003f
Sep 4 04:25:45.922701 kernel: signal: max sigframe size: 1776
Sep 4 04:25:45.922709 kernel: rcu: Hierarchical SRCU implementation.
Sep 4 04:25:45.922718 kernel: rcu: Max phase no-delay instances is 400.
Sep 4 04:25:45.922728 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level
Sep 4 04:25:45.922736 kernel: smp: Bringing up secondary CPUs ...
Sep 4 04:25:45.922744 kernel: smpboot: x86: Booting SMP configuration:
Sep 4 04:25:45.922751 kernel: .... node #0, CPUs: #1 #2 #3
Sep 4 04:25:45.922759 kernel: smp: Brought up 1 node, 4 CPUs
Sep 4 04:25:45.922767 kernel: smpboot: Total of 4 processors activated (22358.00 BogoMIPS)
Sep 4 04:25:45.922775 kernel: Memory: 2426872K/2571752K available (14336K kernel code, 2428K rwdata, 9988K rodata, 57768K init, 1248K bss, 138952K reserved, 0K cma-reserved)
Sep 4 04:25:45.922783 kernel: devtmpfs: initialized
Sep 4 04:25:45.922791 kernel: x86/mm: Memory block size: 128MB
Sep 4 04:25:45.922801 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Sep 4 04:25:45.922809 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Sep 4 04:25:45.922817 kernel: pinctrl core: initialized pinctrl subsystem
Sep 4 04:25:45.922825 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Sep 4 04:25:45.922833 kernel: audit: initializing netlink subsys (disabled)
Sep 4 04:25:45.922841 kernel: audit: type=2000 audit(1756959942.463:1): state=initialized audit_enabled=0 res=1
Sep 4 04:25:45.922849 kernel: thermal_sys: Registered thermal governor 'step_wise'
Sep 4 04:25:45.922857 kernel: thermal_sys: Registered thermal governor 'user_space'
Sep 4 04:25:45.922864 kernel: cpuidle: using governor menu
Sep 4 04:25:45.922875 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Sep 4 04:25:45.922882 kernel: dca service started, version 1.12.1
Sep 4 04:25:45.922890 kernel: PCI: ECAM [mem 0xb0000000-0xbfffffff] (base 0xb0000000) for domain 0000 [bus 00-ff]
Sep 4 04:25:45.922905 kernel: PCI: ECAM [mem 0xb0000000-0xbfffffff] reserved as E820 entry
Sep 4 04:25:45.922914 kernel: PCI: Using configuration type 1 for base access
Sep 4 04:25:45.922922 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Sep 4 04:25:45.922930 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Sep 4 04:25:45.922951 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Sep 4 04:25:45.922977 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Sep 4 04:25:45.922988 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Sep 4 04:25:45.922996 kernel: ACPI: Added _OSI(Module Device)
Sep 4 04:25:45.923004 kernel: ACPI: Added _OSI(Processor Device)
Sep 4 04:25:45.923012 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Sep 4 04:25:45.923020 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Sep 4 04:25:45.923028 kernel: ACPI: Interpreter enabled
Sep 4 04:25:45.923036 kernel: ACPI: PM: (supports S0 S3 S5)
Sep 4 04:25:45.923044 kernel: ACPI: Using IOAPIC for interrupt routing
Sep 4 04:25:45.923052 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Sep 4 04:25:45.923062 kernel: PCI: Using E820 reservations for host bridge windows
Sep 4 04:25:45.923070 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Sep 4 04:25:45.923078 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Sep 4 04:25:45.923327 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Sep 4 04:25:45.923460 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR]
Sep 4 04:25:45.923627 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability]
Sep 4 04:25:45.923640 kernel: PCI host bridge to bus 0000:00
Sep 4 04:25:45.923808 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Sep 4 04:25:45.924000 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Sep 4 04:25:45.924158 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Sep 4 04:25:45.924313 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xafffffff window]
Sep 4 04:25:45.924466 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Sep 4 04:25:45.924619 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x8ffffffff window]
Sep 4 04:25:45.924773 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Sep 4 04:25:45.925051 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000 conventional PCI endpoint
Sep 4 04:25:45.925263 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000 conventional PCI endpoint
Sep 4 04:25:45.925435 kernel: pci 0000:00:01.0: BAR 0 [mem 0xfd000000-0xfdffffff pref]
Sep 4 04:25:45.925589 kernel: pci 0000:00:01.0: BAR 2 [mem 0xfebd0000-0xfebd0fff]
Sep 4 04:25:45.925746 kernel: pci 0000:00:01.0: ROM [mem 0xfebc0000-0xfebcffff pref]
Sep 4 04:25:45.925904 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Sep 4 04:25:45.926159 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00 conventional PCI endpoint
Sep 4 04:25:45.926324 kernel: pci 0000:00:02.0: BAR 0 [io 0xc0c0-0xc0df]
Sep 4 04:25:45.926481 kernel: pci 0000:00:02.0: BAR 1 [mem 0xfebd1000-0xfebd1fff]
Sep 4 04:25:45.926634 kernel: pci 0000:00:02.0: BAR 4 [mem 0xfe000000-0xfe003fff 64bit pref]
Sep 4 04:25:45.926822 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000 conventional PCI endpoint
Sep 4 04:25:45.927024 kernel: pci 0000:00:03.0: BAR 0 [io 0xc000-0xc07f]
Sep 4 04:25:45.927188 kernel: pci 0000:00:03.0: BAR 1 [mem 0xfebd2000-0xfebd2fff]
Sep 4 04:25:45.927352 kernel: pci 0000:00:03.0: BAR 4 [mem 0xfe004000-0xfe007fff 64bit pref]
Sep 4 04:25:45.927540 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint
Sep 4 04:25:45.927703 kernel: pci 0000:00:04.0: BAR 0 [io 0xc0e0-0xc0ff]
Sep 4 04:25:45.927863 kernel: pci 0000:00:04.0: BAR 1 [mem 0xfebd3000-0xfebd3fff]
Sep 4 04:25:45.928069 kernel: pci 0000:00:04.0: BAR 4 [mem 0xfe008000-0xfe00bfff 64bit pref]
Sep 4 04:25:45.928234 kernel: pci 0000:00:04.0: ROM [mem 0xfeb80000-0xfebbffff pref]
Sep 4 04:25:45.928415 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100 conventional PCI endpoint
Sep 4 04:25:45.928586 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Sep 4 04:25:45.928775 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601 conventional PCI endpoint
Sep 4 04:25:45.929103 kernel: pci 0000:00:1f.2: BAR 4 [io 0xc100-0xc11f]
Sep 4 04:25:45.929327 kernel: pci 0000:00:1f.2: BAR 5 [mem 0xfebd4000-0xfebd4fff]
Sep 4 04:25:45.929514 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500 conventional PCI endpoint
Sep 4 04:25:45.929643 kernel: pci 0000:00:1f.3: BAR 4 [io 0x0700-0x073f]
Sep 4 04:25:45.929660 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Sep 4 04:25:45.929668 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Sep 4 04:25:45.929676 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Sep 4 04:25:45.929684 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Sep 4 04:25:45.929692 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Sep 4 04:25:45.929700 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Sep 4 04:25:45.929708 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Sep 4 04:25:45.929716 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Sep 4 04:25:45.929723 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Sep 4 04:25:45.929734 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Sep 4 04:25:45.929742 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Sep 4 04:25:45.929750 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Sep 4 04:25:45.929757 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Sep 4 04:25:45.929766 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Sep 4 04:25:45.929773 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Sep 4 04:25:45.929781 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Sep 4 04:25:45.929789 kernel: iommu: Default domain type: Translated
Sep 4 04:25:45.929797 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Sep 4 04:25:45.929807 kernel: PCI: Using ACPI for IRQ routing
Sep 4 04:25:45.929815 kernel: PCI: pci_cache_line_size set to 64 bytes
Sep 4 04:25:45.929823 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Sep 4 04:25:45.929831 kernel: e820: reserve RAM buffer [mem 0x9cfdc000-0x9fffffff]
Sep 4 04:25:45.929985 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Sep 4 04:25:45.930123 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Sep 4 04:25:45.930250 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Sep 4 04:25:45.930261 kernel: vgaarb: loaded
Sep 4 04:25:45.930269 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Sep 4 04:25:45.930282 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Sep 4 04:25:45.930290 kernel: clocksource: Switched to clocksource kvm-clock
Sep 4 04:25:45.930298 kernel: VFS: Disk quotas dquot_6.6.0
Sep 4 04:25:45.930306 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Sep 4 04:25:45.930315 kernel: pnp: PnP ACPI init
Sep 4 04:25:45.930467 kernel: system 00:05: [mem 0xb0000000-0xbfffffff window] has been reserved
Sep 4 04:25:45.930480 kernel: pnp: PnP ACPI: found 6 devices
Sep 4 04:25:45.930489 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Sep 4 04:25:45.930500 kernel: NET: Registered PF_INET protocol family
Sep 4 04:25:45.930508 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Sep 4 04:25:45.930516 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Sep 4 04:25:45.930525 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Sep 4 04:25:45.930533 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Sep 4 04:25:45.930541 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Sep 4 04:25:45.930549 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Sep 4 04:25:45.930557 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Sep 4 04:25:45.930567 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Sep 4 04:25:45.930575 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Sep 4 04:25:45.930583 kernel: NET: Registered PF_XDP protocol family
Sep 4 04:25:45.930699 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Sep 4 04:25:45.930812 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Sep 4 04:25:45.930925 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Sep 4 04:25:45.931078 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xafffffff window]
Sep 4 04:25:45.931205 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window]
Sep 4 04:25:45.931319 kernel: pci_bus 0000:00: resource 9 [mem 0x100000000-0x8ffffffff window]
Sep 4 04:25:45.931334 kernel: PCI: CLS 0 bytes, default 64
Sep 4 04:25:45.931342 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x2848e100549, max_idle_ns: 440795215505 ns
Sep 4 04:25:45.931350 kernel: Initialise system trusted keyrings
Sep 4 04:25:45.931358 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Sep 4 04:25:45.931366 kernel: Key type asymmetric registered
Sep 4 04:25:45.931374 kernel: Asymmetric key parser 'x509' registered
Sep 4 04:25:45.931382 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Sep 4 04:25:45.931390 kernel: io scheduler mq-deadline registered
Sep 4 04:25:45.931398 kernel: io scheduler kyber registered
Sep 4 04:25:45.931408 kernel: io scheduler bfq registered
Sep 4 04:25:45.931416 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Sep 4 04:25:45.931424 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
Sep 4 04:25:45.931432 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23
Sep 4 04:25:45.931440 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20
Sep 4 04:25:45.931448 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Sep 4 04:25:45.931456 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Sep 4 04:25:45.931464 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Sep 4 04:25:45.931472 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Sep 4 04:25:45.931482 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Sep 4 04:25:45.931624 kernel: rtc_cmos 00:04: RTC can wake from S4
Sep 4 04:25:45.931636 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Sep 4 04:25:45.931751 kernel: rtc_cmos 00:04: registered as rtc0
Sep 4 04:25:45.931868 kernel: rtc_cmos 00:04: setting system clock to 2025-09-04T04:25:45 UTC (1756959945)
Sep 4 04:25:45.932015 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram, hpet irqs
Sep 4 04:25:45.932027 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
Sep 4 04:25:45.932035 kernel: NET: Registered PF_INET6 protocol family
Sep 4 04:25:45.932047 kernel: Segment Routing with IPv6
Sep 4 04:25:45.932055 kernel: In-situ OAM (IOAM) with IPv6
Sep 4 04:25:45.932063 kernel: NET: Registered PF_PACKET protocol family
Sep 4 04:25:45.932071 kernel: Key type dns_resolver registered
Sep 4 04:25:45.932080 kernel: IPI shorthand broadcast: enabled
Sep 4 04:25:45.932107 kernel: sched_clock: Marking stable (3655006386, 138444959)->(3822168946, -28717601)
Sep 4 04:25:45.932118 kernel: registered taskstats version 1
Sep 4 04:25:45.932129 kernel: Loading compiled-in X.509 certificates
Sep 4 04:25:45.932140 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.44-flatcar: 2c6c093c583f207375cbe16db1a23ce651c8380d'
Sep 4 04:25:45.932154 kernel: Demotion targets for Node 0: null
Sep 4 04:25:45.932165 kernel: Key type .fscrypt registered
Sep 4 04:25:45.932175 kernel: Key type fscrypt-provisioning registered
Sep 4 04:25:45.932183 kernel: ima: No TPM chip found, activating TPM-bypass!
Sep 4 04:25:45.932191 kernel: ima: Allocated hash algorithm: sha1
Sep 4 04:25:45.932199 kernel: ima: No architecture policies found
Sep 4 04:25:45.932207 kernel: clk: Disabling unused clocks
Sep 4 04:25:45.932215 kernel: Warning: unable to open an initial console.
Sep 4 04:25:45.932226 kernel: Freeing unused kernel image (initmem) memory: 57768K
Sep 4 04:25:45.932234 kernel: Write protecting the kernel read-only data: 24576k
Sep 4 04:25:45.932242 kernel: Freeing unused kernel image (rodata/data gap) memory: 252K
Sep 4 04:25:45.932250 kernel: Run /init as init process
Sep 4 04:25:45.932258 kernel: with arguments:
Sep 4 04:25:45.932266 kernel: /init
Sep 4 04:25:45.932273 kernel: with environment:
Sep 4 04:25:45.932281 kernel: HOME=/
Sep 4 04:25:45.932289 kernel: TERM=linux
Sep 4 04:25:45.932297 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Sep 4 04:25:45.932309 systemd[1]: Successfully made /usr/ read-only.
Sep 4 04:25:45.932332 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Sep 4 04:25:45.932343 systemd[1]: Detected virtualization kvm.
Sep 4 04:25:45.932352 systemd[1]: Detected architecture x86-64.
Sep 4 04:25:45.932360 systemd[1]: Running in initrd.
Sep 4 04:25:45.932371 systemd[1]: No hostname configured, using default hostname.
Sep 4 04:25:45.932380 systemd[1]: Hostname set to .
Sep 4 04:25:45.932388 systemd[1]: Initializing machine ID from VM UUID.
Sep 4 04:25:45.932397 systemd[1]: Queued start job for default target initrd.target.
Sep 4 04:25:45.932406 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Sep 4 04:25:45.932414 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Sep 4 04:25:45.932424 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Sep 4 04:25:45.932433 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Sep 4 04:25:45.932443 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Sep 4 04:25:45.932453 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Sep 4 04:25:45.932463 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Sep 4 04:25:45.932472 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Sep 4 04:25:45.932480 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Sep 4 04:25:45.932489 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Sep 4 04:25:45.932500 systemd[1]: Reached target paths.target - Path Units.
Sep 4 04:25:45.932509 systemd[1]: Reached target slices.target - Slice Units.
Sep 4 04:25:45.932518 systemd[1]: Reached target swap.target - Swaps.
Sep 4 04:25:45.932526 systemd[1]: Reached target timers.target - Timer Units.
Sep 4 04:25:45.932535 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Sep 4 04:25:45.932543 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Sep 4 04:25:45.932552 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Sep 4 04:25:45.932560 systemd[1]: Listening on systemd-journald.socket - Journal Sockets.
Sep 4 04:25:45.932569 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Sep 4 04:25:45.932581 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Sep 4 04:25:45.932589 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Sep 4 04:25:45.932598 systemd[1]: Reached target sockets.target - Socket Units. Sep 4 04:25:45.932606 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Sep 4 04:25:45.932615 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Sep 4 04:25:45.932628 systemd[1]: Finished network-cleanup.service - Network Cleanup. Sep 4 04:25:45.932637 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply). Sep 4 04:25:45.932646 systemd[1]: Starting systemd-fsck-usr.service... Sep 4 04:25:45.932654 systemd[1]: Starting systemd-journald.service - Journal Service... Sep 4 04:25:45.932663 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Sep 4 04:25:45.932672 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Sep 4 04:25:45.932714 systemd-journald[219]: Collecting audit messages is disabled. Sep 4 04:25:45.932738 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Sep 4 04:25:45.932750 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Sep 4 04:25:45.932763 systemd[1]: Finished systemd-fsck-usr.service. Sep 4 04:25:45.932775 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Sep 4 04:25:45.932788 systemd-journald[219]: Journal started Sep 4 04:25:45.932810 systemd-journald[219]: Runtime Journal (/run/log/journal/0a86a9de915642d395f96295f43e7a60) is 6M, max 48.6M, 42.5M free. Sep 4 04:25:45.919591 systemd-modules-load[222]: Inserted module 'overlay' Sep 4 04:25:45.945182 systemd[1]: Started systemd-journald.service - Journal Service. Sep 4 04:25:45.953013 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. 
Update your scripts to load br_netfilter if you need this. Sep 4 04:25:45.956044 kernel: Bridge firewalling registered Sep 4 04:25:45.955608 systemd-modules-load[222]: Inserted module 'br_netfilter' Sep 4 04:25:45.956113 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Sep 4 04:25:45.959257 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Sep 4 04:25:45.962212 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Sep 4 04:25:45.973087 systemd-tmpfiles[237]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring. Sep 4 04:25:45.975970 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Sep 4 04:25:46.029833 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Sep 4 04:25:46.034059 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Sep 4 04:25:46.036787 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Sep 4 04:25:46.042541 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Sep 4 04:25:46.044755 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Sep 4 04:25:46.047179 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Sep 4 04:25:46.078243 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Sep 4 04:25:46.090415 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Sep 4 04:25:46.094804 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Sep 4 04:25:46.111302 systemd-resolved[249]: Positive Trust Anchors: Sep 4 04:25:46.111343 systemd-resolved[249]: . 
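The warning above means bridged traffic no longer traverses arp/ip/ip6tables unless br_netfilter is present; systemd-modules-load inserts it at runtime here. To make that persistent across boots, a one-line modules-load.d fragment is enough (a minimal sketch; the filename is just a convention):

```
# /etc/modules-load.d/br_netfilter.conf
# systemd-modules-load.service reads this directory at boot and
# inserts each listed module, so the bridge firewalling hooks are
# registered before any netfilter rules reference them.
br_netfilter
```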
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Sep 4 04:25:46.111387 systemd-resolved[249]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Sep 4 04:25:46.114905 systemd-resolved[249]: Defaulting to hostname 'linux'. Sep 4 04:25:46.136276 dracut-cmdline[264]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=d1884c9a158af3462973a912ddb17d2a643da411fd9cba6f05e0fc855c1b0a44 Sep 4 04:25:46.116569 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Sep 4 04:25:46.136400 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Sep 4 04:25:46.251026 kernel: SCSI subsystem initialized Sep 4 04:25:46.262012 kernel: Loading iSCSI transport class v2.0-870. Sep 4 04:25:46.276024 kernel: iscsi: registered transport (tcp) Sep 4 04:25:46.305013 kernel: iscsi: registered transport (qla4xxx) Sep 4 04:25:46.305108 kernel: QLogic iSCSI HBA Driver Sep 4 04:25:46.330567 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Sep 4 04:25:46.359637 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. 
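The sixteen `16.172.in-addr.arpa` … `31.172.in-addr.arpa` zones in systemd-resolved's negative trust anchor list above are exactly the reverse zones of the RFC 1918 range 172.16.0.0/12, for which DNSSEC validation can never succeed. A quick check of that span (Python used purely as a calculator here):

```python
import ipaddress

# The NN.172.in-addr.arpa entries (NN = 16..31) cover one /12:
net = ipaddress.ip_network("172.16.0.0/12")
print(net[0], net[-1])  # 172.16.0.0 172.31.255.255

# Sixteen reverse zones, matching the list in the log line above.
zones = [f"{n}.172.in-addr.arpa" for n in range(16, 32)]
print(len(zones))  # 16
```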
Sep 4 04:25:46.363954 systemd[1]: Reached target network-pre.target - Preparation for Network. Sep 4 04:25:46.438914 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Sep 4 04:25:46.441052 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Sep 4 04:25:46.519993 kernel: raid6: avx2x4 gen() 29225 MB/s Sep 4 04:25:46.542010 kernel: raid6: avx2x2 gen() 29096 MB/s Sep 4 04:25:46.559303 kernel: raid6: avx2x1 gen() 17140 MB/s Sep 4 04:25:46.559391 kernel: raid6: using algorithm avx2x4 gen() 29225 MB/s Sep 4 04:25:46.577119 kernel: raid6: .... xor() 6281 MB/s, rmw enabled Sep 4 04:25:46.577208 kernel: raid6: using avx2x2 recovery algorithm Sep 4 04:25:46.603037 kernel: xor: automatically using best checksumming function avx Sep 4 04:25:46.772028 kernel: Btrfs loaded, zoned=no, fsverity=no Sep 4 04:25:46.784832 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Sep 4 04:25:46.788850 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Sep 4 04:25:46.824033 systemd-udevd[472]: Using default interface naming scheme 'v255'. Sep 4 04:25:46.832165 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Sep 4 04:25:46.836091 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Sep 4 04:25:46.862680 dracut-pre-trigger[478]: rd.md=0: removing MD RAID activation Sep 4 04:25:46.901173 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Sep 4 04:25:46.904841 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Sep 4 04:25:47.015887 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Sep 4 04:25:47.020531 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... 
Sep 4 04:25:47.073015 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues Sep 4 04:25:47.111009 kernel: cryptd: max_cpu_qlen set to 1000 Sep 4 04:25:47.123284 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB) Sep 4 04:25:47.128073 kernel: libata version 3.00 loaded. Sep 4 04:25:47.135999 kernel: ahci 0000:00:1f.2: version 3.0 Sep 4 04:25:47.136296 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16 Sep 4 04:25:47.138984 kernel: AES CTR mode by8 optimization enabled Sep 4 04:25:47.139029 kernel: ahci 0000:00:1f.2: AHCI vers 0001.0000, 32 command slots, 1.5 Gbps, SATA mode Sep 4 04:25:47.141420 kernel: ahci 0000:00:1f.2: 6/6 ports implemented (port mask 0x3f) Sep 4 04:25:47.141750 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only Sep 4 04:25:47.164115 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Sep 4 04:25:47.164205 kernel: GPT:9289727 != 19775487 Sep 4 04:25:47.164221 kernel: GPT:Alternate GPT header not at the end of the disk. Sep 4 04:25:47.164237 kernel: GPT:9289727 != 19775487 Sep 4 04:25:47.164266 kernel: GPT: Use GNU Parted to correct GPT errors. 
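The GPT complaints above are the usual sign of a disk image that was grown after partitioning: the backup GPT header still sits at the old end of the disk (sector 9289727) instead of the real end (19775487). Flatcar repairs this itself moments later via disk-uuid.service, but on a generic system the same fix is a single command (device name taken from this VM; adjust as needed):

```
# Relocate the backup GPT header and partition table to the actual
# end of the disk after a resize; sgdisk -e is the short form.
sgdisk --move-second-header /dev/vda

# parted prints the same warning interactively and offers to "Fix" it.
parted /dev/vda print
```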
Sep 4 04:25:47.164281 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Sep 4 04:25:47.164297 kernel: scsi host0: ahci Sep 4 04:25:47.164641 kernel: scsi host1: ahci Sep 4 04:25:47.164886 kernel: scsi host2: ahci Sep 4 04:25:47.165125 kernel: scsi host3: ahci Sep 4 04:25:47.165338 kernel: scsi host4: ahci Sep 4 04:25:47.165534 kernel: scsi host5: ahci Sep 4 04:25:47.165720 kernel: ata1: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4100 irq 34 lpm-pol 1 Sep 4 04:25:47.165736 kernel: ata2: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4180 irq 34 lpm-pol 1 Sep 4 04:25:47.165756 kernel: ata3: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4200 irq 34 lpm-pol 1 Sep 4 04:25:47.165770 kernel: ata4: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4280 irq 34 lpm-pol 1 Sep 4 04:25:47.165784 kernel: ata5: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4300 irq 34 lpm-pol 1 Sep 4 04:25:47.165799 kernel: ata6: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4380 irq 34 lpm-pol 1 Sep 4 04:25:47.165812 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3 Sep 4 04:25:47.157825 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Sep 4 04:25:47.157925 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Sep 4 04:25:47.180000 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Sep 4 04:25:47.181385 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Sep 4 04:25:47.183249 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. Sep 4 04:25:47.266572 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. 
Sep 4 04:25:47.473016 kernel: ata5: SATA link down (SStatus 0 SControl 300) Sep 4 04:25:47.473097 kernel: ata2: SATA link down (SStatus 0 SControl 300) Sep 4 04:25:47.473995 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300) Sep 4 04:25:47.474988 kernel: ata1: SATA link down (SStatus 0 SControl 300) Sep 4 04:25:47.475018 kernel: ata4: SATA link down (SStatus 0 SControl 300) Sep 4 04:25:47.475988 kernel: ata3.00: LPM support broken, forcing max_power Sep 4 04:25:47.477407 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100 Sep 4 04:25:47.477428 kernel: ata3.00: applying bridge limits Sep 4 04:25:47.477986 kernel: ata6: SATA link down (SStatus 0 SControl 300) Sep 4 04:25:47.478989 kernel: ata3.00: LPM support broken, forcing max_power Sep 4 04:25:47.479004 kernel: ata3.00: configured for UDMA/100 Sep 4 04:25:47.479990 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5 Sep 4 04:25:47.539820 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. Sep 4 04:25:47.580426 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray Sep 4 04:25:47.580789 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Sep 4 04:25:47.602113 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0 Sep 4 04:25:47.604659 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. Sep 4 04:25:47.626591 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Sep 4 04:25:47.636149 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. Sep 4 04:25:47.636705 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132. Sep 4 04:25:47.638486 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Sep 4 04:25:47.974250 disk-uuid[639]: Primary Header is updated. 
Sep 4 04:25:47.974250 disk-uuid[639]: Secondary Entries is updated. Sep 4 04:25:47.974250 disk-uuid[639]: Secondary Header is updated. Sep 4 04:25:47.981992 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Sep 4 04:25:47.988005 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Sep 4 04:25:48.071311 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Sep 4 04:25:48.096714 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Sep 4 04:25:48.097252 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Sep 4 04:25:48.097726 systemd[1]: Reached target remote-fs.target - Remote File Systems. Sep 4 04:25:48.099264 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Sep 4 04:25:48.136250 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Sep 4 04:25:49.050001 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Sep 4 04:25:49.050747 disk-uuid[644]: The operation has completed successfully. Sep 4 04:25:49.089691 systemd[1]: disk-uuid.service: Deactivated successfully. Sep 4 04:25:49.089821 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Sep 4 04:25:49.129602 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Sep 4 04:25:49.159246 sh[668]: Success Sep 4 04:25:49.179179 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Sep 4 04:25:49.179257 kernel: device-mapper: uevent: version 1.0.3 Sep 4 04:25:49.180511 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev Sep 4 04:25:49.191995 kernel: device-mapper: verity: sha256 using shash "sha256-ni" Sep 4 04:25:49.225144 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Sep 4 04:25:49.228481 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... 
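disk-uuid.service exists because cloned images would otherwise share one disk GUID; on first boot it writes a fresh random GUID into both the primary and backup GPT headers, after which the kernel re-reads the vda partition table (the repeated `vda: vda1 vda2 …` lines above). The GUID itself is an ordinary version-4 UUID, roughly (a sketch, not Flatcar's actual implementation):

```python
import uuid

# A regenerated GPT disk GUID is simply a random (version 4) UUID;
# the same value is written to both GPT headers.
new_guid = uuid.uuid4()
print(new_guid.version)    # 4
print(len(str(new_guid)))  # 36 (8-4-4-4-12 hex digits plus hyphens)
```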
Sep 4 04:25:49.244218 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Sep 4 04:25:49.253943 kernel: BTRFS: device fsid c26d2db4-0109-42a5-bc6f-bbb834b82868 devid 1 transid 37 /dev/mapper/usr (253:0) scanned by mount (680) Sep 4 04:25:49.254038 kernel: BTRFS info (device dm-0): first mount of filesystem c26d2db4-0109-42a5-bc6f-bbb834b82868 Sep 4 04:25:49.254058 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Sep 4 04:25:49.261007 kernel: BTRFS info (device dm-0): disabling log replay at mount time Sep 4 04:25:49.261074 kernel: BTRFS info (device dm-0): enabling free space tree Sep 4 04:25:49.263239 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Sep 4 04:25:49.264535 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System. Sep 4 04:25:49.265718 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Sep 4 04:25:49.267163 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Sep 4 04:25:49.270125 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Sep 4 04:25:49.300990 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (711) Sep 4 04:25:49.303982 kernel: BTRFS info (device vda6): first mount of filesystem 1535a26e-7205-4f17-83f6-e5f828340771 Sep 4 04:25:49.304014 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Sep 4 04:25:49.307221 kernel: BTRFS info (device vda6): turning on async discard Sep 4 04:25:49.307257 kernel: BTRFS info (device vda6): enabling free space tree Sep 4 04:25:49.313997 kernel: BTRFS info (device vda6): last unmount of filesystem 1535a26e-7205-4f17-83f6-e5f828340771 Sep 4 04:25:49.314629 systemd[1]: Finished ignition-setup.service - Ignition (setup). 
Sep 4 04:25:49.318489 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Sep 4 04:25:49.420559 ignition[754]: Ignition 2.22.0 Sep 4 04:25:49.420581 ignition[754]: Stage: fetch-offline Sep 4 04:25:49.420636 ignition[754]: no configs at "/usr/lib/ignition/base.d" Sep 4 04:25:49.420663 ignition[754]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Sep 4 04:25:49.420782 ignition[754]: parsed url from cmdline: "" Sep 4 04:25:49.420788 ignition[754]: no config URL provided Sep 4 04:25:49.420795 ignition[754]: reading system config file "/usr/lib/ignition/user.ign" Sep 4 04:25:49.420812 ignition[754]: no config at "/usr/lib/ignition/user.ign" Sep 4 04:25:49.420844 ignition[754]: op(1): [started] loading QEMU firmware config module Sep 4 04:25:49.420865 ignition[754]: op(1): executing: "modprobe" "qemu_fw_cfg" Sep 4 04:25:49.432229 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Sep 4 04:25:49.463825 systemd[1]: Starting systemd-networkd.service - Network Configuration... Sep 4 04:25:49.465070 ignition[754]: op(1): [finished] loading QEMU firmware config module Sep 4 04:25:49.510449 systemd-networkd[859]: lo: Link UP Sep 4 04:25:49.510459 systemd-networkd[859]: lo: Gained carrier Sep 4 04:25:49.512057 systemd-networkd[859]: Enumeration completed Sep 4 04:25:49.512479 systemd-networkd[859]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 4 04:25:49.512565 ignition[754]: parsing config with SHA512: 8cdedeae48fb7b158dd1889c1f7768950e3f61b5839c682948ddc707979848c5e78c611c1ac947d10535bc81570f8ce391dbfddbecdb6dad72efaadad88bc421 Sep 4 04:25:49.512483 systemd-networkd[859]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Sep 4 04:25:49.514122 systemd[1]: Started systemd-networkd.service - Network Configuration. Sep 4 04:25:49.514556 systemd[1]: Reached target network.target - Network. 
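Ignition logs the SHA-512 digest of the config it parsed (the long hex string in the "parsing config with SHA512:" line above), which makes it easy to confirm that the config delivered via qemu_fw_cfg matches the file you wrote. Recomputing the digest locally over the raw bytes should produce the same hex string; a sketch with a hypothetical config:

```python
import hashlib

# Ignition reports the SHA-512 of the config it received; recomputing
# it over your source file should yield the identical hex string.
config_bytes = b'{"ignition": {"version": "3.4.0"}}'  # hypothetical example
digest = hashlib.sha512(config_bytes).hexdigest()
print(len(digest))  # always 128 hex characters, like the log line above
```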
Sep 4 04:25:49.515336 systemd-networkd[859]: eth0: Link UP Sep 4 04:25:49.515502 systemd-networkd[859]: eth0: Gained carrier Sep 4 04:25:49.521573 ignition[754]: fetch-offline: fetch-offline passed Sep 4 04:25:49.515512 systemd-networkd[859]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 4 04:25:49.521638 ignition[754]: Ignition finished successfully Sep 4 04:25:49.521013 unknown[754]: fetched base config from "system" Sep 4 04:25:49.521022 unknown[754]: fetched user config from "qemu" Sep 4 04:25:49.525303 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Sep 4 04:25:49.526725 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Sep 4 04:25:49.527762 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Sep 4 04:25:49.547048 systemd-networkd[859]: eth0: DHCPv4 address 10.0.0.124/16, gateway 10.0.0.1 acquired from 10.0.0.1 Sep 4 04:25:49.572216 ignition[863]: Ignition 2.22.0 Sep 4 04:25:49.572230 ignition[863]: Stage: kargs Sep 4 04:25:49.572382 ignition[863]: no configs at "/usr/lib/ignition/base.d" Sep 4 04:25:49.572392 ignition[863]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Sep 4 04:25:49.573428 ignition[863]: kargs: kargs passed Sep 4 04:25:49.573477 ignition[863]: Ignition finished successfully Sep 4 04:25:49.580007 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Sep 4 04:25:49.582310 systemd[1]: Starting ignition-disks.service - Ignition (disks)... 
Sep 4 04:25:49.621901 ignition[872]: Ignition 2.22.0 Sep 4 04:25:49.621917 ignition[872]: Stage: disks Sep 4 04:25:49.622100 ignition[872]: no configs at "/usr/lib/ignition/base.d" Sep 4 04:25:49.622114 ignition[872]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Sep 4 04:25:49.622924 ignition[872]: disks: disks passed Sep 4 04:25:49.622996 ignition[872]: Ignition finished successfully Sep 4 04:25:49.630004 systemd[1]: Finished ignition-disks.service - Ignition (disks). Sep 4 04:25:49.632325 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Sep 4 04:25:49.632671 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Sep 4 04:25:49.633233 systemd[1]: Reached target local-fs.target - Local File Systems. Sep 4 04:25:49.633567 systemd[1]: Reached target sysinit.target - System Initialization. Sep 4 04:25:49.633917 systemd[1]: Reached target basic.target - Basic System. Sep 4 04:25:49.635792 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Sep 4 04:25:49.674484 systemd-fsck[882]: ROOT: clean, 15/553520 files, 52789/553472 blocks Sep 4 04:25:49.875827 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Sep 4 04:25:49.880389 systemd[1]: Mounting sysroot.mount - /sysroot... Sep 4 04:25:50.006992 kernel: EXT4-fs (vda9): mounted filesystem d147a273-ffc0-4c78-a5f1-46a3b3f6b4ff r/w with ordered data mode. Quota mode: none. Sep 4 04:25:50.007681 systemd[1]: Mounted sysroot.mount - /sysroot. Sep 4 04:25:50.008800 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Sep 4 04:25:50.011278 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Sep 4 04:25:50.014049 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Sep 4 04:25:50.014830 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. 
Sep 4 04:25:50.014905 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Sep 4 04:25:50.014936 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Sep 4 04:25:50.033830 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Sep 4 04:25:50.036890 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Sep 4 04:25:50.042528 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (890) Sep 4 04:25:50.042554 kernel: BTRFS info (device vda6): first mount of filesystem 1535a26e-7205-4f17-83f6-e5f828340771 Sep 4 04:25:50.042565 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Sep 4 04:25:50.045675 kernel: BTRFS info (device vda6): turning on async discard Sep 4 04:25:50.045738 kernel: BTRFS info (device vda6): enabling free space tree Sep 4 04:25:50.048448 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Sep 4 04:25:50.082425 initrd-setup-root[914]: cut: /sysroot/etc/passwd: No such file or directory Sep 4 04:25:50.087403 initrd-setup-root[921]: cut: /sysroot/etc/group: No such file or directory Sep 4 04:25:50.092780 initrd-setup-root[928]: cut: /sysroot/etc/shadow: No such file or directory Sep 4 04:25:50.097538 initrd-setup-root[935]: cut: /sysroot/etc/gshadow: No such file or directory Sep 4 04:25:50.200709 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Sep 4 04:25:50.203360 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Sep 4 04:25:50.205436 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Sep 4 04:25:50.232980 kernel: BTRFS info (device vda6): last unmount of filesystem 1535a26e-7205-4f17-83f6-e5f828340771 Sep 4 04:25:50.255136 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Sep 4 04:25:50.258414 systemd[1]: sysroot-oem.mount: Deactivated successfully. 
Sep 4 04:25:50.272928 ignition[1003]: INFO : Ignition 2.22.0 Sep 4 04:25:50.272928 ignition[1003]: INFO : Stage: mount Sep 4 04:25:50.274789 ignition[1003]: INFO : no configs at "/usr/lib/ignition/base.d" Sep 4 04:25:50.274789 ignition[1003]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Sep 4 04:25:50.274789 ignition[1003]: INFO : mount: mount passed Sep 4 04:25:50.274789 ignition[1003]: INFO : Ignition finished successfully Sep 4 04:25:50.276986 systemd[1]: Finished ignition-mount.service - Ignition (mount). Sep 4 04:25:50.279976 systemd[1]: Starting ignition-files.service - Ignition (files)... Sep 4 04:25:50.312445 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Sep 4 04:25:50.346985 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (1016) Sep 4 04:25:50.348983 kernel: BTRFS info (device vda6): first mount of filesystem 1535a26e-7205-4f17-83f6-e5f828340771 Sep 4 04:25:50.348997 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Sep 4 04:25:50.351987 kernel: BTRFS info (device vda6): turning on async discard Sep 4 04:25:50.352012 kernel: BTRFS info (device vda6): enabling free space tree Sep 4 04:25:50.353926 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Sep 4 04:25:50.403427 ignition[1033]: INFO : Ignition 2.22.0 Sep 4 04:25:50.403427 ignition[1033]: INFO : Stage: files Sep 4 04:25:50.405267 ignition[1033]: INFO : no configs at "/usr/lib/ignition/base.d" Sep 4 04:25:50.405267 ignition[1033]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Sep 4 04:25:50.407715 ignition[1033]: DEBUG : files: compiled without relabeling support, skipping Sep 4 04:25:50.409027 ignition[1033]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Sep 4 04:25:50.409027 ignition[1033]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Sep 4 04:25:50.412007 ignition[1033]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Sep 4 04:25:50.413475 ignition[1033]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Sep 4 04:25:50.415255 unknown[1033]: wrote ssh authorized keys file for user: core Sep 4 04:25:50.416622 ignition[1033]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Sep 4 04:25:50.418397 ignition[1033]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz" Sep 4 04:25:50.418397 ignition[1033]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz: attempt #1 Sep 4 04:25:50.451241 ignition[1033]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Sep 4 04:25:50.670626 ignition[1033]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz" Sep 4 04:25:50.670626 ignition[1033]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz" Sep 4 04:25:50.675154 ignition[1033]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET 
https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1 Sep 4 04:25:50.714207 systemd-networkd[859]: eth0: Gained IPv6LL Sep 4 04:25:50.797667 ignition[1033]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Sep 4 04:25:51.101285 ignition[1033]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz" Sep 4 04:25:51.101285 ignition[1033]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh" Sep 4 04:25:51.105232 ignition[1033]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh" Sep 4 04:25:51.105232 ignition[1033]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml" Sep 4 04:25:51.105232 ignition[1033]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml" Sep 4 04:25:51.105232 ignition[1033]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Sep 4 04:25:51.105232 ignition[1033]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Sep 4 04:25:51.105232 ignition[1033]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Sep 4 04:25:51.105232 ignition[1033]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Sep 4 04:25:51.117990 ignition[1033]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf" Sep 4 04:25:51.117990 ignition[1033]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf" Sep 4 04:25:51.117990 ignition[1033]: 
INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw" Sep 4 04:25:51.117990 ignition[1033]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw" Sep 4 04:25:51.117990 ignition[1033]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw" Sep 4 04:25:51.117990 ignition[1033]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://extensions.flatcar.org/extensions/kubernetes-v1.33.0-x86-64.raw: attempt #1 Sep 4 04:25:51.476461 ignition[1033]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK Sep 4 04:25:52.134390 ignition[1033]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw" Sep 4 04:25:52.134390 ignition[1033]: INFO : files: op(c): [started] processing unit "prepare-helm.service" Sep 4 04:25:52.138689 ignition[1033]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Sep 4 04:25:52.146205 ignition[1033]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Sep 4 04:25:52.146205 ignition[1033]: INFO : files: op(c): [finished] processing unit "prepare-helm.service" Sep 4 04:25:52.146205 ignition[1033]: INFO : files: op(e): [started] processing unit "coreos-metadata.service" Sep 4 04:25:52.152183 ignition[1033]: INFO : files: op(e): op(f): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Sep 4 04:25:52.152183 ignition[1033]: INFO : files: op(e): op(f): [finished] writing unit 
"coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Sep 4 04:25:52.152183 ignition[1033]: INFO : files: op(e): [finished] processing unit "coreos-metadata.service" Sep 4 04:25:52.152183 ignition[1033]: INFO : files: op(10): [started] setting preset to disabled for "coreos-metadata.service" Sep 4 04:25:52.177031 ignition[1033]: INFO : files: op(10): op(11): [started] removing enablement symlink(s) for "coreos-metadata.service" Sep 4 04:25:52.370456 ignition[1033]: INFO : files: op(10): op(11): [finished] removing enablement symlink(s) for "coreos-metadata.service" Sep 4 04:25:52.372321 ignition[1033]: INFO : files: op(10): [finished] setting preset to disabled for "coreos-metadata.service" Sep 4 04:25:52.372321 ignition[1033]: INFO : files: op(12): [started] setting preset to enabled for "prepare-helm.service" Sep 4 04:25:52.372321 ignition[1033]: INFO : files: op(12): [finished] setting preset to enabled for "prepare-helm.service" Sep 4 04:25:52.372321 ignition[1033]: INFO : files: createResultFile: createFiles: op(13): [started] writing file "/sysroot/etc/.ignition-result.json" Sep 4 04:25:52.372321 ignition[1033]: INFO : files: createResultFile: createFiles: op(13): [finished] writing file "/sysroot/etc/.ignition-result.json" Sep 4 04:25:52.372321 ignition[1033]: INFO : files: files passed Sep 4 04:25:52.372321 ignition[1033]: INFO : Ignition finished successfully Sep 4 04:25:52.384953 systemd[1]: Finished ignition-files.service - Ignition (files). Sep 4 04:25:52.387288 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Sep 4 04:25:52.390318 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Sep 4 04:25:52.422664 systemd[1]: ignition-quench.service: Deactivated successfully. Sep 4 04:25:52.422822 systemd[1]: Finished ignition-quench.service - Ignition (record completion). 
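Everything in the files stage above — ssh keys, the tarballs under /opt, the written units, the enable/disable presets — is driven by the user-provided Ignition config. A minimal config that would produce a similar "writing file" / "processing unit" sequence looks roughly like this (a hypothetical sketch against Ignition spec 3.x; names and contents are illustrative):

```json
{
  "ignition": { "version": "3.4.0" },
  "storage": {
    "files": [{
      "path": "/home/core/install.sh",
      "mode": 493,
      "contents": { "source": "data:,echo%20hello" }
    }]
  },
  "systemd": {
    "units": [{
      "name": "prepare-helm.service",
      "enabled": true,
      "contents": "[Unit]\nDescription=Example unit\n[Service]\nExecStart=/home/core/install.sh\n[Install]\nWantedBy=multi-user.target\n"
    }]
  }
}
```

Note that `mode` is a decimal integer in Ignition JSON (493 is 0755), and enabling a unit is what produces the "setting preset to enabled" lines in the log.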
Sep 4 04:25:52.425953 initrd-setup-root-after-ignition[1062]: grep: /sysroot/oem/oem-release: No such file or directory
Sep 4 04:25:52.429904 initrd-setup-root-after-ignition[1064]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Sep 4 04:25:52.429904 initrd-setup-root-after-ignition[1064]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Sep 4 04:25:52.433797 initrd-setup-root-after-ignition[1068]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Sep 4 04:25:52.433447 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Sep 4 04:25:52.434597 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Sep 4 04:25:52.437970 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Sep 4 04:25:52.522553 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Sep 4 04:25:52.522745 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Sep 4 04:25:52.523695 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Sep 4 04:25:52.528116 systemd[1]: Reached target initrd.target - Initrd Default Target.
Sep 4 04:25:52.530267 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Sep 4 04:25:52.532835 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Sep 4 04:25:52.580500 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Sep 4 04:25:52.584517 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Sep 4 04:25:52.613649 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Sep 4 04:25:52.616520 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Sep 4 04:25:52.619429 systemd[1]: Stopped target timers.target - Timer Units.
Sep 4 04:25:52.621365 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Sep 4 04:25:52.622576 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Sep 4 04:25:52.625257 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Sep 4 04:25:52.627328 systemd[1]: Stopped target basic.target - Basic System.
Sep 4 04:25:52.629191 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Sep 4 04:25:52.631471 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Sep 4 04:25:52.634045 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Sep 4 04:25:52.636492 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System.
Sep 4 04:25:52.638752 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Sep 4 04:25:52.640952 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Sep 4 04:25:52.643570 systemd[1]: Stopped target sysinit.target - System Initialization.
Sep 4 04:25:52.645915 systemd[1]: Stopped target local-fs.target - Local File Systems.
Sep 4 04:25:52.648091 systemd[1]: Stopped target swap.target - Swaps.
Sep 4 04:25:52.649730 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Sep 4 04:25:52.650904 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Sep 4 04:25:52.653525 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Sep 4 04:25:52.655911 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Sep 4 04:25:52.658533 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Sep 4 04:25:52.659744 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Sep 4 04:25:52.662748 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Sep 4 04:25:52.662991 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Sep 4 04:25:52.665782 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Sep 4 04:25:52.665930 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Sep 4 04:25:52.666590 systemd[1]: Stopped target paths.target - Path Units.
Sep 4 04:25:52.669334 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Sep 4 04:25:52.674097 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Sep 4 04:25:52.674654 systemd[1]: Stopped target slices.target - Slice Units.
Sep 4 04:25:52.677307 systemd[1]: Stopped target sockets.target - Socket Units.
Sep 4 04:25:52.677628 systemd[1]: iscsid.socket: Deactivated successfully.
Sep 4 04:25:52.677729 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Sep 4 04:25:52.680476 systemd[1]: iscsiuio.socket: Deactivated successfully.
Sep 4 04:25:52.680566 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Sep 4 04:25:52.682311 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Sep 4 04:25:52.682467 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Sep 4 04:25:52.683930 systemd[1]: ignition-files.service: Deactivated successfully.
Sep 4 04:25:52.684062 systemd[1]: Stopped ignition-files.service - Ignition (files).
Sep 4 04:25:52.688431 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Sep 4 04:25:52.688865 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Sep 4 04:25:52.689025 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Sep 4 04:25:52.692409 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Sep 4 04:25:52.695063 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Sep 4 04:25:52.695186 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Sep 4 04:25:52.696412 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Sep 4 04:25:52.696513 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Sep 4 04:25:52.702929 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Sep 4 04:25:52.710122 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Sep 4 04:25:52.733821 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Sep 4 04:25:52.764200 ignition[1088]: INFO : Ignition 2.22.0
Sep 4 04:25:52.764200 ignition[1088]: INFO : Stage: umount
Sep 4 04:25:52.767160 ignition[1088]: INFO : no configs at "/usr/lib/ignition/base.d"
Sep 4 04:25:52.767160 ignition[1088]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Sep 4 04:25:52.767160 ignition[1088]: INFO : umount: umount passed
Sep 4 04:25:52.767160 ignition[1088]: INFO : Ignition finished successfully
Sep 4 04:25:52.769939 systemd[1]: ignition-mount.service: Deactivated successfully.
Sep 4 04:25:52.770137 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Sep 4 04:25:52.770722 systemd[1]: Stopped target network.target - Network.
Sep 4 04:25:52.772871 systemd[1]: ignition-disks.service: Deactivated successfully.
Sep 4 04:25:52.772937 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Sep 4 04:25:52.773688 systemd[1]: ignition-kargs.service: Deactivated successfully.
Sep 4 04:25:52.773744 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Sep 4 04:25:52.774467 systemd[1]: ignition-setup.service: Deactivated successfully.
Sep 4 04:25:52.774528 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Sep 4 04:25:52.778258 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Sep 4 04:25:52.778310 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Sep 4 04:25:52.778667 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Sep 4 04:25:52.782434 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Sep 4 04:25:52.790086 systemd[1]: systemd-resolved.service: Deactivated successfully.
Sep 4 04:25:52.790207 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Sep 4 04:25:52.800797 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully.
Sep 4 04:25:52.801080 systemd[1]: systemd-networkd.service: Deactivated successfully.
Sep 4 04:25:52.801210 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Sep 4 04:25:52.804947 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully.
Sep 4 04:25:52.806235 systemd[1]: Stopped target network-pre.target - Preparation for Network.
Sep 4 04:25:52.811404 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Sep 4 04:25:52.811454 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Sep 4 04:25:52.817553 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Sep 4 04:25:52.819468 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Sep 4 04:25:52.820603 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Sep 4 04:25:52.823493 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Sep 4 04:25:52.823573 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Sep 4 04:25:52.826466 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Sep 4 04:25:52.826533 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Sep 4 04:25:52.827282 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Sep 4 04:25:52.827338 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Sep 4 04:25:52.832284 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Sep 4 04:25:52.836584 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Sep 4 04:25:52.836688 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully.
Sep 4 04:25:52.859243 systemd[1]: network-cleanup.service: Deactivated successfully.
Sep 4 04:25:52.859401 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Sep 4 04:25:52.863164 systemd[1]: systemd-udevd.service: Deactivated successfully.
Sep 4 04:25:52.863409 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Sep 4 04:25:52.867094 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Sep 4 04:25:52.867180 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Sep 4 04:25:52.867829 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Sep 4 04:25:52.867874 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Sep 4 04:25:52.868459 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Sep 4 04:25:52.868513 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Sep 4 04:25:52.874130 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Sep 4 04:25:52.874232 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Sep 4 04:25:52.891521 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Sep 4 04:25:52.891623 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Sep 4 04:25:52.896722 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Sep 4 04:25:52.897221 systemd[1]: systemd-network-generator.service: Deactivated successfully.
Sep 4 04:25:52.897279 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line.
Sep 4 04:25:52.901355 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Sep 4 04:25:52.901415 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Sep 4 04:25:52.905148 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Sep 4 04:25:52.905199 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Sep 4 04:25:52.910074 systemd[1]: run-credentials-systemd\x2dnetwork\x2dgenerator.service.mount: Deactivated successfully.
Sep 4 04:25:52.910147 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully.
Sep 4 04:25:52.910199 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
Sep 4 04:25:52.963256 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Sep 4 04:25:52.963442 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Sep 4 04:25:53.293349 systemd[1]: sysroot-boot.service: Deactivated successfully.
Sep 4 04:25:53.293551 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Sep 4 04:25:53.294823 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Sep 4 04:25:53.298372 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Sep 4 04:25:53.298482 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Sep 4 04:25:53.300384 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Sep 4 04:25:53.330380 systemd[1]: Switching root.
Sep 4 04:25:53.364593 systemd-journald[219]: Journal stopped
Sep 4 04:25:55.371661 systemd-journald[219]: Received SIGTERM from PID 1 (systemd).
Sep 4 04:25:55.371744 kernel: SELinux: policy capability network_peer_controls=1
Sep 4 04:25:55.371769 kernel: SELinux: policy capability open_perms=1
Sep 4 04:25:55.371783 kernel: SELinux: policy capability extended_socket_class=1
Sep 4 04:25:55.371794 kernel: SELinux: policy capability always_check_network=0
Sep 4 04:25:55.372046 kernel: SELinux: policy capability cgroup_seclabel=1
Sep 4 04:25:55.372062 kernel: SELinux: policy capability nnp_nosuid_transition=1
Sep 4 04:25:55.372074 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Sep 4 04:25:55.372086 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Sep 4 04:25:55.372097 kernel: SELinux: policy capability userspace_initial_context=0
Sep 4 04:25:55.372119 kernel: audit: type=1403 audit(1756959954.102:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Sep 4 04:25:55.372137 systemd[1]: Successfully loaded SELinux policy in 68.112ms.
Sep 4 04:25:55.372155 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 9.778ms.
Sep 4 04:25:55.372169 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Sep 4 04:25:55.372182 systemd[1]: Detected virtualization kvm.
Sep 4 04:25:55.372194 systemd[1]: Detected architecture x86-64.
Sep 4 04:25:55.372206 systemd[1]: Detected first boot.
Sep 4 04:25:55.372218 systemd[1]: Initializing machine ID from VM UUID.
Sep 4 04:25:55.372237 zram_generator::config[1134]: No configuration found.
Sep 4 04:25:55.372250 kernel: Guest personality initialized and is inactive
Sep 4 04:25:55.372262 kernel: VMCI host device registered (name=vmci, major=10, minor=125)
Sep 4 04:25:55.372277 kernel: Initialized host personality
Sep 4 04:25:55.373019 kernel: NET: Registered PF_VSOCK protocol family
Sep 4 04:25:55.373043 systemd[1]: Populated /etc with preset unit settings.
Sep 4 04:25:55.373058 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully.
Sep 4 04:25:55.373375 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Sep 4 04:25:55.373390 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Sep 4 04:25:55.373403 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Sep 4 04:25:55.373415 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Sep 4 04:25:55.373432 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Sep 4 04:25:55.373444 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Sep 4 04:25:55.373457 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Sep 4 04:25:55.373469 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Sep 4 04:25:55.373482 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Sep 4 04:25:55.373495 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Sep 4 04:25:55.373507 systemd[1]: Created slice user.slice - User and Session Slice.
Sep 4 04:25:55.373520 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Sep 4 04:25:55.373533 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Sep 4 04:25:55.373548 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Sep 4 04:25:55.373574 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Sep 4 04:25:55.373587 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Sep 4 04:25:55.373600 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Sep 4 04:25:55.373612 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Sep 4 04:25:55.373624 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Sep 4 04:25:55.373637 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Sep 4 04:25:55.373649 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Sep 4 04:25:55.373664 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Sep 4 04:25:55.373679 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Sep 4 04:25:55.373691 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Sep 4 04:25:55.373731 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Sep 4 04:25:55.373745 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Sep 4 04:25:55.373757 systemd[1]: Reached target slices.target - Slice Units.
Sep 4 04:25:55.373769 systemd[1]: Reached target swap.target - Swaps.
Sep 4 04:25:55.373781 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Sep 4 04:25:55.373794 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Sep 4 04:25:55.373809 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption.
Sep 4 04:25:55.373821 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Sep 4 04:25:55.373834 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Sep 4 04:25:55.373846 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Sep 4 04:25:55.373860 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Sep 4 04:25:55.373887 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Sep 4 04:25:55.373904 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Sep 4 04:25:55.373921 systemd[1]: Mounting media.mount - External Media Directory...
Sep 4 04:25:55.373937 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 4 04:25:55.373985 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Sep 4 04:25:55.373998 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Sep 4 04:25:55.374011 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Sep 4 04:25:55.374024 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Sep 4 04:25:55.374037 systemd[1]: Reached target machines.target - Containers.
Sep 4 04:25:55.374049 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Sep 4 04:25:55.374062 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Sep 4 04:25:55.374074 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Sep 4 04:25:55.374089 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Sep 4 04:25:55.374102 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Sep 4 04:25:55.374114 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Sep 4 04:25:55.374126 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Sep 4 04:25:55.374144 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Sep 4 04:25:55.374156 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Sep 4 04:25:55.374169 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Sep 4 04:25:55.374181 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Sep 4 04:25:55.374193 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Sep 4 04:25:55.374208 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Sep 4 04:25:55.374220 systemd[1]: Stopped systemd-fsck-usr.service.
Sep 4 04:25:55.374234 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Sep 4 04:25:55.374246 systemd[1]: Starting systemd-journald.service - Journal Service...
Sep 4 04:25:55.374258 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Sep 4 04:25:55.374270 kernel: loop: module loaded
Sep 4 04:25:55.374283 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Sep 4 04:25:55.374294 kernel: fuse: init (API version 7.41)
Sep 4 04:25:55.374306 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Sep 4 04:25:55.374321 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials...
Sep 4 04:25:55.374333 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Sep 4 04:25:55.374345 systemd[1]: verity-setup.service: Deactivated successfully.
Sep 4 04:25:55.374357 systemd[1]: Stopped verity-setup.service.
Sep 4 04:25:55.374398 systemd-journald[1198]: Collecting audit messages is disabled.
Sep 4 04:25:55.374433 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 4 04:25:55.374446 systemd-journald[1198]: Journal started
Sep 4 04:25:55.374469 systemd-journald[1198]: Runtime Journal (/run/log/journal/0a86a9de915642d395f96295f43e7a60) is 6M, max 48.6M, 42.5M free.
Sep 4 04:25:55.382074 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Sep 4 04:25:55.382110 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Sep 4 04:25:54.781078 systemd[1]: Queued start job for default target multi-user.target.
Sep 4 04:25:54.809300 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
Sep 4 04:25:54.810005 systemd[1]: systemd-journald.service: Deactivated successfully.
Sep 4 04:25:55.385077 systemd[1]: Started systemd-journald.service - Journal Service.
Sep 4 04:25:55.386547 systemd[1]: Mounted media.mount - External Media Directory.
Sep 4 04:25:55.387793 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Sep 4 04:25:55.389273 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Sep 4 04:25:55.390935 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Sep 4 04:25:55.392806 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Sep 4 04:25:55.396020 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Sep 4 04:25:55.396412 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Sep 4 04:25:55.427023 kernel: ACPI: bus type drm_connector registered
Sep 4 04:25:55.428833 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Sep 4 04:25:55.429143 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Sep 4 04:25:55.430747 systemd[1]: modprobe@drm.service: Deactivated successfully.
Sep 4 04:25:55.430990 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Sep 4 04:25:55.432529 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Sep 4 04:25:55.432767 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Sep 4 04:25:55.435063 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Sep 4 04:25:55.435336 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Sep 4 04:25:55.436911 systemd[1]: modprobe@loop.service: Deactivated successfully.
Sep 4 04:25:55.437236 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Sep 4 04:25:55.439346 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Sep 4 04:25:55.441401 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Sep 4 04:25:55.443273 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Sep 4 04:25:55.445162 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials.
Sep 4 04:25:55.461736 systemd[1]: Reached target network-pre.target - Preparation for Network.
Sep 4 04:25:55.465105 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Sep 4 04:25:55.467631 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Sep 4 04:25:55.469070 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Sep 4 04:25:55.469189 systemd[1]: Reached target local-fs.target - Local File Systems.
Sep 4 04:25:55.471709 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management.
Sep 4 04:25:55.482364 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Sep 4 04:25:55.535756 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Sep 4 04:25:55.542401 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Sep 4 04:25:55.545100 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Sep 4 04:25:55.546501 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Sep 4 04:25:55.555817 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Sep 4 04:25:55.601078 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Sep 4 04:25:55.604388 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Sep 4 04:25:55.609354 systemd-journald[1198]: Time spent on flushing to /var/log/journal/0a86a9de915642d395f96295f43e7a60 is 42.049ms for 984 entries.
Sep 4 04:25:55.609354 systemd-journald[1198]: System Journal (/var/log/journal/0a86a9de915642d395f96295f43e7a60) is 8M, max 195.6M, 187.6M free.
Sep 4 04:25:55.835210 systemd-journald[1198]: Received client request to flush runtime journal.
Sep 4 04:25:55.835258 kernel: loop0: detected capacity change from 0 to 128016
Sep 4 04:25:55.835294 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Sep 4 04:25:55.835308 kernel: loop1: detected capacity change from 0 to 110984
Sep 4 04:25:55.609562 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Sep 4 04:25:55.621251 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Sep 4 04:25:55.623389 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Sep 4 04:25:55.677875 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Sep 4 04:25:55.707868 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Sep 4 04:25:55.797300 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Sep 4 04:25:55.800163 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Sep 4 04:25:55.802883 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Sep 4 04:25:55.808118 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk...
Sep 4 04:25:55.812084 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Sep 4 04:25:55.838590 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Sep 4 04:25:55.856998 kernel: loop2: detected capacity change from 0 to 229808
Sep 4 04:25:56.415998 kernel: loop3: detected capacity change from 0 to 128016
Sep 4 04:25:56.477000 kernel: loop4: detected capacity change from 0 to 110984
Sep 4 04:25:56.481773 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Sep 4 04:25:56.484811 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Sep 4 04:25:56.529018 kernel: loop5: detected capacity change from 0 to 229808
Sep 4 04:25:56.549236 systemd-tmpfiles[1273]: ACLs are not supported, ignoring.
Sep 4 04:25:56.549260 systemd-tmpfiles[1273]: ACLs are not supported, ignoring.
Sep 4 04:25:56.550397 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Sep 4 04:25:56.551411 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk.
Sep 4 04:25:56.553043 (sd-merge)[1271]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'.
Sep 4 04:25:56.553765 (sd-merge)[1271]: Merged extensions into '/usr'.
Sep 4 04:25:56.557327 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Sep 4 04:25:56.563078 systemd[1]: Reload requested from client PID 1245 ('systemd-sysext') (unit systemd-sysext.service)...
Sep 4 04:25:56.563105 systemd[1]: Reloading...
Sep 4 04:25:56.681015 zram_generator::config[1316]: No configuration found.
Sep 4 04:25:56.792088 ldconfig[1240]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Sep 4 04:25:56.893943 systemd[1]: Reloading finished in 330 ms.
Sep 4 04:25:56.923217 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Sep 4 04:25:56.925195 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Sep 4 04:25:56.950335 systemd[1]: Starting ensure-sysext.service...
Sep 4 04:25:56.953092 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Sep 4 04:25:56.998051 systemd[1]: Reload requested from client PID 1339 ('systemctl') (unit ensure-sysext.service)...
Sep 4 04:25:56.998073 systemd[1]: Reloading...
Sep 4 04:25:57.005878 systemd-tmpfiles[1340]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring.
Sep 4 04:25:57.005932 systemd-tmpfiles[1340]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring.
Sep 4 04:25:57.006518 systemd-tmpfiles[1340]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Sep 4 04:25:57.006868 systemd-tmpfiles[1340]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Sep 4 04:25:57.008174 systemd-tmpfiles[1340]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Sep 4 04:25:57.008541 systemd-tmpfiles[1340]: ACLs are not supported, ignoring.
Sep 4 04:25:57.008680 systemd-tmpfiles[1340]: ACLs are not supported, ignoring.
Sep 4 04:25:57.013691 systemd-tmpfiles[1340]: Detected autofs mount point /boot during canonicalization of boot.
Sep 4 04:25:57.013710 systemd-tmpfiles[1340]: Skipping /boot
Sep 4 04:25:57.025855 systemd-tmpfiles[1340]: Detected autofs mount point /boot during canonicalization of boot.
Sep 4 04:25:57.025871 systemd-tmpfiles[1340]: Skipping /boot
Sep 4 04:25:57.096004 zram_generator::config[1370]: No configuration found.
Sep 4 04:25:57.291137 systemd[1]: Reloading finished in 292 ms.
Sep 4 04:25:57.317533 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Sep 4 04:25:57.363459 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Sep 4 04:25:57.377454 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Sep 4 04:25:57.380820 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Sep 4 04:25:57.383893 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Sep 4 04:25:57.402211 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Sep 4 04:25:57.406556 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Sep 4 04:25:57.411493 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Sep 4 04:25:57.418179 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 4 04:25:57.418399 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Sep 4 04:25:57.424624 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Sep 4 04:25:57.430675 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Sep 4 04:25:57.435806 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Sep 4 04:25:57.437423 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Sep 4 04:25:57.437589 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Sep 4 04:25:57.441406 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Sep 4 04:25:57.442564 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 4 04:25:57.449240 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Sep 4 04:25:57.452665 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Sep 4 04:25:57.452892 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Sep 4 04:25:57.465429 systemd[1]: modprobe@loop.service: Deactivated successfully.
Sep 4 04:25:57.465752 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Sep 4 04:25:57.468030 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Sep 4 04:25:57.468407 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Sep 4 04:25:57.470875 systemd-udevd[1410]: Using default interface naming scheme 'v255'.
Sep 4 04:25:57.472591 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 4 04:25:57.472897 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Sep 4 04:25:57.475045 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Sep 4 04:25:57.476623 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Sep 4 04:25:57.476871 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Sep 4 04:25:57.477148 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Sep 4 04:25:57.480938 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Sep 4 04:25:57.482493 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 4 04:25:57.488268 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Sep 4 04:25:57.493222 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Sep 4 04:25:57.493522 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Sep 4 04:25:57.503099 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 4 04:25:57.503386 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Sep 4 04:25:57.509030 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Sep 4 04:25:57.511828 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Sep 4 04:25:57.516575 augenrules[1444]: No rules
Sep 4 04:25:57.517475 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Sep 4 04:25:57.528205 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Sep 4 04:25:57.529533 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Sep 4 04:25:57.529695 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Sep 4 04:25:57.529876 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 4 04:25:57.531577 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Sep 4 04:25:57.534627 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Sep 4 04:25:57.575074 systemd[1]: audit-rules.service: Deactivated successfully.
Sep 4 04:25:57.575387 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Sep 4 04:25:57.578714 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Sep 4 04:25:57.580486 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Sep 4 04:25:57.582443 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Sep 4 04:25:57.582738 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Sep 4 04:25:57.585937 systemd[1]: modprobe@drm.service: Deactivated successfully.
Sep 4 04:25:57.587287 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Sep 4 04:25:57.589328 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Sep 4 04:25:57.589636 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Sep 4 04:25:57.592111 systemd[1]: modprobe@loop.service: Deactivated successfully.
Sep 4 04:25:57.592555 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Sep 4 04:25:57.611825 systemd[1]: Finished ensure-sysext.service.
Sep 4 04:25:57.621989 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Sep 4 04:25:57.624069 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Sep 4 04:25:57.624151 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Sep 4 04:25:57.626234 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Sep 4 04:25:57.627358 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Sep 4 04:25:57.699025 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
Sep 4 04:25:57.778009 kernel: mousedev: PS/2 mouse device common for all mice
Sep 4 04:25:57.788029 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input4
Sep 4 04:25:57.794080 kernel: ACPI: button: Power Button [PWRF]
Sep 4 04:25:57.815214 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Sep 4 04:25:57.820022 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Sep 4 04:25:57.826010 systemd-resolved[1409]: Positive Trust Anchors:
Sep 4 04:25:57.826031 systemd-resolved[1409]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Sep 4 04:25:57.826063 systemd-resolved[1409]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Sep 4 04:25:57.830004 systemd-resolved[1409]: Defaulting to hostname 'linux'.
Sep 4 04:25:57.832430 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Sep 4 04:25:57.833814 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Sep 4 04:25:57.851106 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Sep 4 04:25:57.856991 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt
Sep 4 04:25:57.858993 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD
Sep 4 04:25:57.888754 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
Sep 4 04:25:57.890391 systemd[1]: Reached target sysinit.target - System Initialization.
Sep 4 04:25:57.890915 systemd-networkd[1493]: lo: Link UP
Sep 4 04:25:57.890932 systemd-networkd[1493]: lo: Gained carrier
Sep 4 04:25:57.891783 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Sep 4 04:25:57.892772 systemd-networkd[1493]: Enumeration completed
Sep 4 04:25:57.893332 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Sep 4 04:25:57.895000 systemd[1]: Started google-oslogin-cache.timer - NSS cache refresh timer.
Sep 4 04:25:57.895346 systemd-networkd[1493]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Sep 4 04:25:57.895359 systemd-networkd[1493]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Sep 4 04:25:57.896227 systemd-networkd[1493]: eth0: Link UP
Sep 4 04:25:57.896425 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Sep 4 04:25:57.896551 systemd-networkd[1493]: eth0: Gained carrier
Sep 4 04:25:57.896570 systemd-networkd[1493]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Sep 4 04:25:57.898315 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Sep 4 04:25:57.898361 systemd[1]: Reached target paths.target - Path Units.
Sep 4 04:25:57.899590 systemd[1]: Reached target time-set.target - System Time Set.
Sep 4 04:25:57.901414 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Sep 4 04:25:57.903011 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Sep 4 04:25:57.904431 systemd[1]: Reached target timers.target - Timer Units.
Sep 4 04:25:57.906846 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Sep 4 04:25:57.908229 systemd-networkd[1493]: eth0: DHCPv4 address 10.0.0.124/16, gateway 10.0.0.1 acquired from 10.0.0.1
Sep 4 04:25:57.911219 systemd[1]: Starting docker.socket - Docker Socket for the API...
Sep 4 04:25:57.915061 systemd-timesyncd[1494]: Network configuration changed, trying to establish connection.
Sep 4 04:25:57.916773 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local).
Sep 4 04:25:57.919003 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK).
Sep 4 04:25:58.422478 systemd-resolved[1409]: Clock change detected. Flushing caches.
Sep 4 04:25:58.422606 systemd-timesyncd[1494]: Contacted time server 10.0.0.1:123 (10.0.0.1).
Sep 4 04:25:58.422665 systemd-timesyncd[1494]: Initial clock synchronization to Thu 2025-09-04 04:25:58.422432 UTC.
Sep 4 04:25:58.423940 systemd[1]: Reached target ssh-access.target - SSH Access Available.
Sep 4 04:25:58.449012 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Sep 4 04:25:58.451440 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket.
Sep 4 04:25:58.453747 systemd[1]: Started systemd-networkd.service - Network Configuration.
Sep 4 04:25:58.456294 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Sep 4 04:25:58.458830 systemd[1]: Reached target network.target - Network.
Sep 4 04:25:58.460954 systemd[1]: Reached target sockets.target - Socket Units.
Sep 4 04:25:58.462199 systemd[1]: Reached target basic.target - Basic System.
Sep 4 04:25:58.463491 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Sep 4 04:25:58.463546 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Sep 4 04:25:58.467322 systemd[1]: Starting containerd.service - containerd container runtime...
Sep 4 04:25:58.470562 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Sep 4 04:25:58.474190 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Sep 4 04:25:58.489571 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Sep 4 04:25:58.497997 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Sep 4 04:25:58.499133 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Sep 4 04:25:58.502098 systemd[1]: Starting google-oslogin-cache.service - NSS cache refresh...
Sep 4 04:25:58.506121 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Sep 4 04:25:58.545771 jq[1530]: false
Sep 4 04:25:58.546455 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Sep 4 04:25:58.551820 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Sep 4 04:25:58.554590 google_oslogin_nss_cache[1532]: oslogin_cache_refresh[1532]: Refreshing passwd entry cache
Sep 4 04:25:58.554536 oslogin_cache_refresh[1532]: Refreshing passwd entry cache
Sep 4 04:25:58.558063 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Sep 4 04:25:58.563976 oslogin_cache_refresh[1532]: Failure getting users, quitting
Sep 4 04:25:58.565832 google_oslogin_nss_cache[1532]: oslogin_cache_refresh[1532]: Failure getting users, quitting
Sep 4 04:25:58.565832 google_oslogin_nss_cache[1532]: oslogin_cache_refresh[1532]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak.
Sep 4 04:25:58.565832 google_oslogin_nss_cache[1532]: oslogin_cache_refresh[1532]: Refreshing group entry cache
Sep 4 04:25:58.563995 oslogin_cache_refresh[1532]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak.
Sep 4 04:25:58.564052 oslogin_cache_refresh[1532]: Refreshing group entry cache
Sep 4 04:25:58.566835 systemd[1]: Starting systemd-logind.service - User Login Management...
Sep 4 04:25:58.570052 extend-filesystems[1531]: Found /dev/vda6
Sep 4 04:25:58.573476 google_oslogin_nss_cache[1532]: oslogin_cache_refresh[1532]: Failure getting groups, quitting
Sep 4 04:25:58.573476 google_oslogin_nss_cache[1532]: oslogin_cache_refresh[1532]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak.
Sep 4 04:25:58.572491 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd...
Sep 4 04:25:58.572187 oslogin_cache_refresh[1532]: Failure getting groups, quitting
Sep 4 04:25:58.572200 oslogin_cache_refresh[1532]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak.
Sep 4 04:25:58.574649 extend-filesystems[1531]: Found /dev/vda9
Sep 4 04:25:58.578114 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Sep 4 04:25:58.582804 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Sep 4 04:25:58.583103 extend-filesystems[1531]: Checking size of /dev/vda9
Sep 4 04:25:58.583456 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Sep 4 04:25:58.584779 systemd[1]: Starting update-engine.service - Update Engine...
Sep 4 04:25:58.587597 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Sep 4 04:25:58.597685 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Sep 4 04:25:58.601069 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Sep 4 04:25:58.601838 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Sep 4 04:25:58.602244 systemd[1]: google-oslogin-cache.service: Deactivated successfully.
Sep 4 04:25:58.602931 systemd[1]: Finished google-oslogin-cache.service - NSS cache refresh.
Sep 4 04:25:58.615842 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Sep 4 04:25:58.616830 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Sep 4 04:25:58.619547 jq[1551]: true
Sep 4 04:25:58.623369 systemd[1]: motdgen.service: Deactivated successfully.
Sep 4 04:25:58.624387 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Sep 4 04:25:58.625905 extend-filesystems[1531]: Resized partition /dev/vda9
Sep 4 04:25:58.648499 update_engine[1549]: I20250904 04:25:58.643659 1549 main.cc:92] Flatcar Update Engine starting
Sep 4 04:25:58.648978 extend-filesystems[1568]: resize2fs 1.47.3 (8-Jul-2025)
Sep 4 04:25:58.654074 kernel: kvm_amd: TSC scaling supported
Sep 4 04:25:58.654100 kernel: kvm_amd: Nested Virtualization enabled
Sep 4 04:25:58.654130 kernel: kvm_amd: Nested Paging enabled
Sep 4 04:25:58.654151 kernel: kvm_amd: LBR virtualization supported
Sep 4 04:25:58.654165 kernel: kvm_amd: Virtual VMLOAD VMSAVE supported
Sep 4 04:25:58.657501 kernel: kvm_amd: Virtual GIF supported
Sep 4 04:25:58.655491 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Sep 4 04:25:58.659595 (ntainerd)[1569]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Sep 4 04:25:58.668927 jq[1566]: true
Sep 4 04:25:58.672898 tar[1556]: linux-amd64/LICENSE
Sep 4 04:25:58.678890 tar[1556]: linux-amd64/helm
Sep 4 04:25:58.742349 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd.
Sep 4 04:25:58.871904 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks
Sep 4 04:25:58.874379 systemd-logind[1541]: Watching system buttons on /dev/input/event2 (Power Button)
Sep 4 04:25:58.874416 systemd-logind[1541]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
Sep 4 04:25:58.876649 systemd-logind[1541]: New seat seat0.
Sep 4 04:25:58.878459 systemd[1]: Started systemd-logind.service - User Login Management.
Sep 4 04:25:59.025076 tar[1556]: linux-amd64/README.md
Sep 4 04:25:59.057183 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin.
Sep 4 04:25:59.116924 kernel: EDAC MC: Ver: 3.0.0
Sep 4 04:25:59.118187 dbus-daemon[1528]: [system] SELinux support is enabled
Sep 4 04:25:59.118409 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Sep 4 04:25:59.130447 update_engine[1549]: I20250904 04:25:59.129092 1549 update_check_scheduler.cc:74] Next update check in 6m58s
Sep 4 04:25:59.123705 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Sep 4 04:25:59.123743 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Sep 4 04:25:59.124332 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Sep 4 04:25:59.124353 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Sep 4 04:25:59.130942 systemd[1]: Started update-engine.service - Update Engine.
Sep 4 04:25:59.136478 dbus-daemon[1528]: [system] Successfully activated service 'org.freedesktop.systemd1'
Sep 4 04:25:59.142125 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Sep 4 04:25:59.180022 sshd_keygen[1567]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Sep 4 04:25:59.193900 kernel: EXT4-fs (vda9): resized filesystem to 1864699
Sep 4 04:25:59.228456 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Sep 4 04:25:59.246301 systemd[1]: Starting issuegen.service - Generate /run/issue...
Sep 4 04:25:59.248022 locksmithd[1602]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Sep 4 04:25:59.293909 systemd[1]: issuegen.service: Deactivated successfully.
Sep 4 04:25:59.294200 systemd[1]: Finished issuegen.service - Generate /run/issue.
Sep 4 04:25:59.299612 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
Sep 4 04:25:59.348119 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Sep 4 04:25:59.401192 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
Sep 4 04:25:59.405074 systemd[1]: Started getty@tty1.service - Getty on tty1.
Sep 4 04:25:59.458617 extend-filesystems[1568]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required
Sep 4 04:25:59.458617 extend-filesystems[1568]: old_desc_blocks = 1, new_desc_blocks = 1
Sep 4 04:25:59.458617 extend-filesystems[1568]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long.
Sep 4 04:25:59.408307 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0.
Sep 4 04:25:59.465483 extend-filesystems[1531]: Resized filesystem in /dev/vda9
Sep 4 04:25:59.409902 systemd[1]: Reached target getty.target - Login Prompts.
Sep 4 04:25:59.463621 systemd[1]: extend-filesystems.service: Deactivated successfully.
Sep 4 04:25:59.464723 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Sep 4 04:25:59.472007 bash[1594]: Updated "/home/core/.ssh/authorized_keys"
Sep 4 04:25:59.473529 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Sep 4 04:25:59.477029 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met.
Sep 4 04:25:59.522164 containerd[1569]: time="2025-09-04T04:25:59Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8
Sep 4 04:25:59.523223 containerd[1569]: time="2025-09-04T04:25:59.523166376Z" level=info msg="starting containerd" revision=fb4c30d4ede3531652d86197bf3fc9515e5276d9 version=v2.0.5
Sep 4 04:25:59.535904 containerd[1569]: time="2025-09-04T04:25:59.535813159Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="14.557µs"
Sep 4 04:25:59.535904 containerd[1569]: time="2025-09-04T04:25:59.535881457Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1
Sep 4 04:25:59.535904 containerd[1569]: time="2025-09-04T04:25:59.535911984Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1
Sep 4 04:25:59.536213 containerd[1569]: time="2025-09-04T04:25:59.536181569Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1
Sep 4 04:25:59.536213 containerd[1569]: time="2025-09-04T04:25:59.536206827Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1
Sep 4 04:25:59.536294 containerd[1569]: time="2025-09-04T04:25:59.536239288Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1
Sep 4 04:25:59.536363 containerd[1569]: time="2025-09-04T04:25:59.536325079Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1
Sep 4 04:25:59.536363 containerd[1569]: time="2025-09-04T04:25:59.536351067Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1
Sep 4 04:25:59.536767 containerd[1569]: time="2025-09-04T04:25:59.536722914Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1
Sep 4 04:25:59.536767 containerd[1569]: time="2025-09-04T04:25:59.536743433Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1
Sep 4 04:25:59.536767 containerd[1569]: time="2025-09-04T04:25:59.536754894Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1
Sep 4 04:25:59.536767 containerd[1569]: time="2025-09-04T04:25:59.536762749Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1
Sep 4 04:25:59.536977 containerd[1569]: time="2025-09-04T04:25:59.536921317Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1
Sep 4 04:25:59.537267 containerd[1569]: time="2025-09-04T04:25:59.537223122Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1
Sep 4 04:25:59.537267 containerd[1569]: time="2025-09-04T04:25:59.537261655Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1
Sep 4 04:25:59.537267 containerd[1569]: time="2025-09-04T04:25:59.537271443Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1
Sep 4 04:25:59.537361 containerd[1569]: time="2025-09-04T04:25:59.537304786Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1
Sep 4 04:25:59.537641 containerd[1569]: time="2025-09-04T04:25:59.537590461Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1
Sep 4 04:25:59.537706 containerd[1569]: time="2025-09-04T04:25:59.537694486Z" level=info msg="metadata content store policy set" policy=shared
Sep 4 04:25:59.549150 containerd[1569]: time="2025-09-04T04:25:59.549084002Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1
Sep 4 04:25:59.549150 containerd[1569]: time="2025-09-04T04:25:59.549141710Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1
Sep 4 04:25:59.549150 containerd[1569]: time="2025-09-04T04:25:59.549156618Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1
Sep 4 04:25:59.549270 containerd[1569]: time="2025-09-04T04:25:59.549169422Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1
Sep 4 04:25:59.549270 containerd[1569]: time="2025-09-04T04:25:59.549181254Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1
Sep 4 04:25:59.549270 containerd[1569]: time="2025-09-04T04:25:59.549193076Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1
Sep 4 04:25:59.549270 containerd[1569]: time="2025-09-04T04:25:59.549205159Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1
Sep 4 04:25:59.549270 containerd[1569]: time="2025-09-04T04:25:59.549216109Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1
Sep 4 04:25:59.549270 containerd[1569]: time="2025-09-04T04:25:59.549225767Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1
Sep 4 04:25:59.549270 containerd[1569]: time="2025-09-04T04:25:59.549235486Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1
Sep 4 04:25:59.549270 containerd[1569]: time="2025-09-04T04:25:59.549244623Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1
Sep 4 04:25:59.549270 containerd[1569]: time="2025-09-04T04:25:59.549267185Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2
Sep 4 04:25:59.549470 containerd[1569]: time="2025-09-04T04:25:59.549424680Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1
Sep 4 04:25:59.549470 containerd[1569]: time="2025-09-04T04:25:59.549448194Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1
Sep 4 04:25:59.549533 containerd[1569]: time="2025-09-04T04:25:59.549465126Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1
Sep 4 04:25:59.549533 containerd[1569]: time="2025-09-04T04:25:59.549487238Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1
Sep 4 04:25:59.549581 containerd[1569]: time="2025-09-04T04:25:59.549533294Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1
Sep 4 04:25:59.549581 containerd[1569]: time="2025-09-04T04:25:59.549546238Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1
Sep 4 04:25:59.549630 containerd[1569]: time="2025-09-04T04:25:59.549578198Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1
Sep 4 04:25:59.549630 containerd[1569]: time="2025-09-04T04:25:59.549593226Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1
Sep 4 04:25:59.549630 containerd[1569]: time="2025-09-04T04:25:59.549607253Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1
Sep 4 04:25:59.549630 containerd[1569]: time="2025-09-04T04:25:59.549621549Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1
Sep 4 04:25:59.549718 containerd[1569]: time="2025-09-04T04:25:59.549635676Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1
Sep 4 04:25:59.549762 containerd[1569]: time="2025-09-04T04:25:59.549728881Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\""
Sep 4 04:25:59.549762 containerd[1569]: time="2025-09-04T04:25:59.549754479Z" level=info msg="Start snapshots syncer"
Sep 4 04:25:59.549815 containerd[1569]: time="2025-09-04T04:25:59.549788523Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1
Sep 4 04:25:59.550197 containerd[1569]: time="2025-09-04T04:25:59.550126556Z" level=info msg="starting cri plugin" config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}"
Sep 4 04:25:59.550326 containerd[1569]: time="2025-09-04T04:25:59.550205865Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1
Sep 4 04:25:59.550326 containerd[1569]: time="2025-09-04T04:25:59.550280134Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1
Sep 4 04:25:59.550436 containerd[1569]: time="2025-09-04T04:25:59.550411791Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1
Sep 4 04:25:59.550469 containerd[1569]: time="2025-09-04T04:25:59.550438351Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1
Sep 4 04:25:59.550496 containerd[1569]: time="2025-09-04T04:25:59.550466343Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1
Sep 4 04:25:59.550496 containerd[1569]: time="2025-09-04T04:25:59.550480169Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1
Sep 4 04:25:59.550496 containerd[1569]: time="2025-09-04T04:25:59.550491450Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1
Sep 4 04:25:59.550593 containerd[1569]: time="2025-09-04T04:25:59.550501219Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1
Sep 4 04:25:59.550593 containerd[1569]: time="2025-09-04T04:25:59.550544550Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1
Sep 4 04:25:59.550593 containerd[1569]: time="2025-09-04T04:25:59.550570960Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1
Sep 4 04:25:59.550593 containerd[1569]: time="2025-09-04T04:25:59.550586318Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1
Sep 4 04:25:59.550686 containerd[1569]: time="2025-09-04T04:25:59.550597860Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1
Sep 4 04:25:59.550686 containerd[1569]: time="2025-09-04T04:25:59.550638406Z" level=info msg="loading plugin"
id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Sep 4 04:25:59.550686 containerd[1569]: time="2025-09-04T04:25:59.550664355Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Sep 4 04:25:59.550686 containerd[1569]: time="2025-09-04T04:25:59.550678682Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Sep 4 04:25:59.550787 containerd[1569]: time="2025-09-04T04:25:59.550692237Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Sep 4 04:25:59.550787 containerd[1569]: time="2025-09-04T04:25:59.550702126Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 Sep 4 04:25:59.550787 containerd[1569]: time="2025-09-04T04:25:59.550711764Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 Sep 4 04:25:59.550787 containerd[1569]: time="2025-09-04T04:25:59.550762779Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 Sep 4 04:25:59.550911 containerd[1569]: time="2025-09-04T04:25:59.550789169Z" level=info msg="runtime interface created" Sep 4 04:25:59.550911 containerd[1569]: time="2025-09-04T04:25:59.550796613Z" level=info msg="created NRI interface" Sep 4 04:25:59.550911 containerd[1569]: time="2025-09-04T04:25:59.550807804Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 Sep 4 04:25:59.550911 containerd[1569]: time="2025-09-04T04:25:59.550821640Z" level=info msg="Connect containerd service" Sep 4 04:25:59.550911 containerd[1569]: time="2025-09-04T04:25:59.550870641Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Sep 4 04:25:59.551832 containerd[1569]: 
time="2025-09-04T04:25:59.551799493Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Sep 4 04:25:59.724898 containerd[1569]: time="2025-09-04T04:25:59.724523390Z" level=info msg="Start subscribing containerd event" Sep 4 04:25:59.725180 containerd[1569]: time="2025-09-04T04:25:59.725082649Z" level=info msg="Start recovering state" Sep 4 04:25:59.725243 containerd[1569]: time="2025-09-04T04:25:59.725014211Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Sep 4 04:25:59.725327 containerd[1569]: time="2025-09-04T04:25:59.725299465Z" level=info msg="Start event monitor" Sep 4 04:25:59.725366 containerd[1569]: time="2025-09-04T04:25:59.725326747Z" level=info msg="Start cni network conf syncer for default" Sep 4 04:25:59.725366 containerd[1569]: time="2025-09-04T04:25:59.725345402Z" level=info msg="Start streaming server" Sep 4 04:25:59.725445 containerd[1569]: time="2025-09-04T04:25:59.725379345Z" level=info msg="Registered namespace \"k8s.io\" with NRI" Sep 4 04:25:59.725445 containerd[1569]: time="2025-09-04T04:25:59.725392129Z" level=info msg="runtime interface starting up..." Sep 4 04:25:59.725445 containerd[1569]: time="2025-09-04T04:25:59.725402298Z" level=info msg="starting plugins..." Sep 4 04:25:59.725445 containerd[1569]: time="2025-09-04T04:25:59.725431854Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Sep 4 04:25:59.725605 containerd[1569]: time="2025-09-04T04:25:59.725356312Z" level=info msg=serving... address=/run/containerd/containerd.sock Sep 4 04:25:59.726248 containerd[1569]: time="2025-09-04T04:25:59.725685710Z" level=info msg="containerd successfully booted in 0.204235s" Sep 4 04:25:59.725889 systemd[1]: Started containerd.service - containerd container runtime. 
Sep 4 04:25:59.857180 systemd-networkd[1493]: eth0: Gained IPv6LL Sep 4 04:25:59.861444 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Sep 4 04:25:59.863673 systemd[1]: Reached target network-online.target - Network is Online. Sep 4 04:25:59.866847 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Sep 4 04:25:59.869833 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 4 04:25:59.895830 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Sep 4 04:25:59.925423 systemd[1]: coreos-metadata.service: Deactivated successfully. Sep 4 04:25:59.925755 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Sep 4 04:25:59.927638 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Sep 4 04:25:59.931176 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Sep 4 04:26:00.193774 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Sep 4 04:26:00.198234 systemd[1]: Started sshd@0-10.0.0.124:22-10.0.0.1:50606.service - OpenSSH per-connection server daemon (10.0.0.1:50606). Sep 4 04:26:00.323066 sshd[1670]: Accepted publickey for core from 10.0.0.1 port 50606 ssh2: RSA SHA256:9+vpZc6EfwWxHenC1ZKsuuGVz7bQEj3BE+z2aG6aI0U Sep 4 04:26:00.325399 sshd-session[1670]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 04:26:00.335231 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Sep 4 04:26:00.338361 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Sep 4 04:26:00.347710 systemd-logind[1541]: New session 1 of user core. Sep 4 04:26:00.365747 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Sep 4 04:26:00.371714 systemd[1]: Starting user@500.service - User Manager for UID 500... 
Sep 4 04:26:00.426633 (systemd)[1675]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Sep 4 04:26:00.430069 systemd-logind[1541]: New session c1 of user core. Sep 4 04:26:00.631949 systemd[1675]: Queued start job for default target default.target. Sep 4 04:26:00.655892 systemd[1675]: Created slice app.slice - User Application Slice. Sep 4 04:26:00.655928 systemd[1675]: Reached target paths.target - Paths. Sep 4 04:26:00.655986 systemd[1675]: Reached target timers.target - Timers. Sep 4 04:26:00.658628 systemd[1675]: Starting dbus.socket - D-Bus User Message Bus Socket... Sep 4 04:26:00.674190 systemd[1675]: Listening on dbus.socket - D-Bus User Message Bus Socket. Sep 4 04:26:00.674392 systemd[1675]: Reached target sockets.target - Sockets. Sep 4 04:26:00.674766 systemd[1675]: Reached target basic.target - Basic System. Sep 4 04:26:00.674844 systemd[1675]: Reached target default.target - Main User Target. Sep 4 04:26:00.674933 systemd[1675]: Startup finished in 233ms. Sep 4 04:26:00.675058 systemd[1]: Started user@500.service - User Manager for UID 500. Sep 4 04:26:00.690192 systemd[1]: Started session-1.scope - Session 1 of User core. Sep 4 04:26:00.761581 systemd[1]: Started sshd@1-10.0.0.124:22-10.0.0.1:50608.service - OpenSSH per-connection server daemon (10.0.0.1:50608). Sep 4 04:26:00.875715 sshd[1686]: Accepted publickey for core from 10.0.0.1 port 50608 ssh2: RSA SHA256:9+vpZc6EfwWxHenC1ZKsuuGVz7bQEj3BE+z2aG6aI0U Sep 4 04:26:00.877541 sshd-session[1686]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 04:26:00.884926 systemd-logind[1541]: New session 2 of user core. Sep 4 04:26:00.895107 systemd[1]: Started session-2.scope - Session 2 of User core. 
Sep 4 04:26:00.955835 sshd[1689]: Connection closed by 10.0.0.1 port 50608 Sep 4 04:26:00.956553 sshd-session[1686]: pam_unix(sshd:session): session closed for user core Sep 4 04:26:00.969875 systemd[1]: sshd@1-10.0.0.124:22-10.0.0.1:50608.service: Deactivated successfully. Sep 4 04:26:00.972846 systemd[1]: session-2.scope: Deactivated successfully. Sep 4 04:26:00.973638 systemd-logind[1541]: Session 2 logged out. Waiting for processes to exit. Sep 4 04:26:00.977288 systemd[1]: Started sshd@2-10.0.0.124:22-10.0.0.1:50620.service - OpenSSH per-connection server daemon (10.0.0.1:50620). Sep 4 04:26:00.979715 systemd-logind[1541]: Removed session 2. Sep 4 04:26:01.056021 sshd[1695]: Accepted publickey for core from 10.0.0.1 port 50620 ssh2: RSA SHA256:9+vpZc6EfwWxHenC1ZKsuuGVz7bQEj3BE+z2aG6aI0U Sep 4 04:26:01.057772 sshd-session[1695]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 04:26:01.063671 systemd-logind[1541]: New session 3 of user core. Sep 4 04:26:01.079054 systemd[1]: Started session-3.scope - Session 3 of User core. Sep 4 04:26:01.139122 sshd[1698]: Connection closed by 10.0.0.1 port 50620 Sep 4 04:26:01.140251 sshd-session[1695]: pam_unix(sshd:session): session closed for user core Sep 4 04:26:01.146559 systemd[1]: sshd@2-10.0.0.124:22-10.0.0.1:50620.service: Deactivated successfully. Sep 4 04:26:01.148773 systemd[1]: session-3.scope: Deactivated successfully. Sep 4 04:26:01.149784 systemd-logind[1541]: Session 3 logged out. Waiting for processes to exit. Sep 4 04:26:01.151534 systemd-logind[1541]: Removed session 3. Sep 4 04:26:01.264442 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 4 04:26:01.266602 systemd[1]: Reached target multi-user.target - Multi-User System. Sep 4 04:26:01.268049 systemd[1]: Startup finished in 3.770s (kernel) + 8.409s (initrd) + 6.728s (userspace) = 18.908s. 
Sep 4 04:26:01.293586 (kubelet)[1708]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 4 04:26:02.291893 kubelet[1708]: E0904 04:26:02.291760 1708 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 4 04:26:02.297071 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 4 04:26:02.297279 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 4 04:26:02.297779 systemd[1]: kubelet.service: Consumed 2.201s CPU time, 268.3M memory peak. Sep 4 04:26:11.151841 systemd[1]: Started sshd@3-10.0.0.124:22-10.0.0.1:41316.service - OpenSSH per-connection server daemon (10.0.0.1:41316). Sep 4 04:26:11.208982 sshd[1721]: Accepted publickey for core from 10.0.0.1 port 41316 ssh2: RSA SHA256:9+vpZc6EfwWxHenC1ZKsuuGVz7bQEj3BE+z2aG6aI0U Sep 4 04:26:11.210461 sshd-session[1721]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 04:26:11.214791 systemd-logind[1541]: New session 4 of user core. Sep 4 04:26:11.224989 systemd[1]: Started session-4.scope - Session 4 of User core. Sep 4 04:26:11.280224 sshd[1724]: Connection closed by 10.0.0.1 port 41316 Sep 4 04:26:11.280573 sshd-session[1721]: pam_unix(sshd:session): session closed for user core Sep 4 04:26:11.293779 systemd[1]: sshd@3-10.0.0.124:22-10.0.0.1:41316.service: Deactivated successfully. Sep 4 04:26:11.295884 systemd[1]: session-4.scope: Deactivated successfully. Sep 4 04:26:11.296689 systemd-logind[1541]: Session 4 logged out. Waiting for processes to exit. Sep 4 04:26:11.299596 systemd[1]: Started sshd@4-10.0.0.124:22-10.0.0.1:41320.service - OpenSSH per-connection server daemon (10.0.0.1:41320). 
Sep 4 04:26:11.300676 systemd-logind[1541]: Removed session 4. Sep 4 04:26:11.363891 sshd[1730]: Accepted publickey for core from 10.0.0.1 port 41320 ssh2: RSA SHA256:9+vpZc6EfwWxHenC1ZKsuuGVz7bQEj3BE+z2aG6aI0U Sep 4 04:26:11.365291 sshd-session[1730]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 04:26:11.370779 systemd-logind[1541]: New session 5 of user core. Sep 4 04:26:11.378104 systemd[1]: Started session-5.scope - Session 5 of User core. Sep 4 04:26:11.429929 sshd[1733]: Connection closed by 10.0.0.1 port 41320 Sep 4 04:26:11.430119 sshd-session[1730]: pam_unix(sshd:session): session closed for user core Sep 4 04:26:11.441474 systemd[1]: sshd@4-10.0.0.124:22-10.0.0.1:41320.service: Deactivated successfully. Sep 4 04:26:11.443952 systemd[1]: session-5.scope: Deactivated successfully. Sep 4 04:26:11.445048 systemd-logind[1541]: Session 5 logged out. Waiting for processes to exit. Sep 4 04:26:11.448566 systemd[1]: Started sshd@5-10.0.0.124:22-10.0.0.1:41324.service - OpenSSH per-connection server daemon (10.0.0.1:41324). Sep 4 04:26:11.449200 systemd-logind[1541]: Removed session 5. Sep 4 04:26:11.511850 sshd[1739]: Accepted publickey for core from 10.0.0.1 port 41324 ssh2: RSA SHA256:9+vpZc6EfwWxHenC1ZKsuuGVz7bQEj3BE+z2aG6aI0U Sep 4 04:26:11.514091 sshd-session[1739]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 04:26:11.522193 systemd-logind[1541]: New session 6 of user core. Sep 4 04:26:11.536355 systemd[1]: Started session-6.scope - Session 6 of User core. Sep 4 04:26:11.596444 sshd[1742]: Connection closed by 10.0.0.1 port 41324 Sep 4 04:26:11.596755 sshd-session[1739]: pam_unix(sshd:session): session closed for user core Sep 4 04:26:11.607239 systemd[1]: sshd@5-10.0.0.124:22-10.0.0.1:41324.service: Deactivated successfully. Sep 4 04:26:11.609556 systemd[1]: session-6.scope: Deactivated successfully. Sep 4 04:26:11.610557 systemd-logind[1541]: Session 6 logged out. 
Waiting for processes to exit. Sep 4 04:26:11.614952 systemd[1]: Started sshd@6-10.0.0.124:22-10.0.0.1:41332.service - OpenSSH per-connection server daemon (10.0.0.1:41332). Sep 4 04:26:11.616578 systemd-logind[1541]: Removed session 6. Sep 4 04:26:11.674445 sshd[1748]: Accepted publickey for core from 10.0.0.1 port 41332 ssh2: RSA SHA256:9+vpZc6EfwWxHenC1ZKsuuGVz7bQEj3BE+z2aG6aI0U Sep 4 04:26:11.676921 sshd-session[1748]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 04:26:11.685092 systemd-logind[1541]: New session 7 of user core. Sep 4 04:26:11.706194 systemd[1]: Started session-7.scope - Session 7 of User core. Sep 4 04:26:11.773708 sudo[1752]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Sep 4 04:26:11.774115 sudo[1752]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 4 04:26:11.796010 sudo[1752]: pam_unix(sudo:session): session closed for user root Sep 4 04:26:11.798007 sshd[1751]: Connection closed by 10.0.0.1 port 41332 Sep 4 04:26:11.798557 sshd-session[1748]: pam_unix(sshd:session): session closed for user core Sep 4 04:26:11.809381 systemd[1]: sshd@6-10.0.0.124:22-10.0.0.1:41332.service: Deactivated successfully. Sep 4 04:26:11.811532 systemd[1]: session-7.scope: Deactivated successfully. Sep 4 04:26:11.812642 systemd-logind[1541]: Session 7 logged out. Waiting for processes to exit. Sep 4 04:26:11.816143 systemd[1]: Started sshd@7-10.0.0.124:22-10.0.0.1:41334.service - OpenSSH per-connection server daemon (10.0.0.1:41334). Sep 4 04:26:11.817130 systemd-logind[1541]: Removed session 7. Sep 4 04:26:11.870988 sshd[1758]: Accepted publickey for core from 10.0.0.1 port 41334 ssh2: RSA SHA256:9+vpZc6EfwWxHenC1ZKsuuGVz7bQEj3BE+z2aG6aI0U Sep 4 04:26:11.872849 sshd-session[1758]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 04:26:11.879157 systemd-logind[1541]: New session 8 of user core. 
Sep 4 04:26:11.889351 systemd[1]: Started session-8.scope - Session 8 of User core. Sep 4 04:26:11.948560 sudo[1763]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Sep 4 04:26:11.948952 sudo[1763]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 4 04:26:11.959738 sudo[1763]: pam_unix(sudo:session): session closed for user root Sep 4 04:26:11.968418 sudo[1762]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Sep 4 04:26:11.968796 sudo[1762]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 4 04:26:11.981071 systemd[1]: Starting audit-rules.service - Load Audit Rules... Sep 4 04:26:12.040135 augenrules[1785]: No rules Sep 4 04:26:12.042214 systemd[1]: audit-rules.service: Deactivated successfully. Sep 4 04:26:12.042546 systemd[1]: Finished audit-rules.service - Load Audit Rules. Sep 4 04:26:12.043818 sudo[1762]: pam_unix(sudo:session): session closed for user root Sep 4 04:26:12.045877 sshd[1761]: Connection closed by 10.0.0.1 port 41334 Sep 4 04:26:12.046272 sshd-session[1758]: pam_unix(sshd:session): session closed for user core Sep 4 04:26:12.063509 systemd[1]: sshd@7-10.0.0.124:22-10.0.0.1:41334.service: Deactivated successfully. Sep 4 04:26:12.065831 systemd[1]: session-8.scope: Deactivated successfully. Sep 4 04:26:12.066817 systemd-logind[1541]: Session 8 logged out. Waiting for processes to exit. Sep 4 04:26:12.070535 systemd[1]: Started sshd@8-10.0.0.124:22-10.0.0.1:41336.service - OpenSSH per-connection server daemon (10.0.0.1:41336). Sep 4 04:26:12.071672 systemd-logind[1541]: Removed session 8. 
Sep 4 04:26:12.140774 sshd[1794]: Accepted publickey for core from 10.0.0.1 port 41336 ssh2: RSA SHA256:9+vpZc6EfwWxHenC1ZKsuuGVz7bQEj3BE+z2aG6aI0U Sep 4 04:26:12.142961 sshd-session[1794]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 04:26:12.150119 systemd-logind[1541]: New session 9 of user core. Sep 4 04:26:12.164138 systemd[1]: Started session-9.scope - Session 9 of User core. Sep 4 04:26:12.221587 sudo[1798]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Sep 4 04:26:12.222061 sudo[1798]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 4 04:26:12.469621 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Sep 4 04:26:12.471538 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 4 04:26:12.726513 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 4 04:26:12.800649 (kubelet)[1825]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 4 04:26:12.887879 kubelet[1825]: E0904 04:26:12.887764 1825 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 4 04:26:12.896051 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 4 04:26:12.896358 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 4 04:26:12.896814 systemd[1]: kubelet.service: Consumed 303ms CPU time, 111.1M memory peak. Sep 4 04:26:12.918805 systemd[1]: Starting docker.service - Docker Application Container Engine... 
Sep 4 04:26:12.934428 (dockerd)[1834]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Sep 4 04:26:13.634980 dockerd[1834]: time="2025-09-04T04:26:13.634896556Z" level=info msg="Starting up" Sep 4 04:26:13.635847 dockerd[1834]: time="2025-09-04T04:26:13.635792616Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" Sep 4 04:26:13.660095 dockerd[1834]: time="2025-09-04T04:26:13.660025804Z" level=info msg="Creating a containerd client" address=/var/run/docker/libcontainerd/docker-containerd.sock timeout=1m0s Sep 4 04:26:14.431389 dockerd[1834]: time="2025-09-04T04:26:14.431289272Z" level=info msg="Loading containers: start." Sep 4 04:26:14.445305 kernel: Initializing XFRM netlink socket Sep 4 04:26:15.013542 systemd-networkd[1493]: docker0: Link UP Sep 4 04:26:15.022529 dockerd[1834]: time="2025-09-04T04:26:15.022478041Z" level=info msg="Loading containers: done." Sep 4 04:26:15.043591 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck1903992196-merged.mount: Deactivated successfully. 
Sep 4 04:26:15.046627 dockerd[1834]: time="2025-09-04T04:26:15.046567900Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Sep 4 04:26:15.046707 dockerd[1834]: time="2025-09-04T04:26:15.046693365Z" level=info msg="Docker daemon" commit=6430e49a55babd9b8f4d08e70ecb2b68900770fe containerd-snapshotter=false storage-driver=overlay2 version=28.0.4 Sep 4 04:26:15.046824 dockerd[1834]: time="2025-09-04T04:26:15.046800636Z" level=info msg="Initializing buildkit" Sep 4 04:26:15.121724 dockerd[1834]: time="2025-09-04T04:26:15.121642384Z" level=info msg="Completed buildkit initialization" Sep 4 04:26:15.129394 dockerd[1834]: time="2025-09-04T04:26:15.129312025Z" level=info msg="Daemon has completed initialization" Sep 4 04:26:15.129551 dockerd[1834]: time="2025-09-04T04:26:15.129426520Z" level=info msg="API listen on /run/docker.sock" Sep 4 04:26:15.130357 systemd[1]: Started docker.service - Docker Application Container Engine. Sep 4 04:26:16.567223 containerd[1569]: time="2025-09-04T04:26:16.567157114Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.4\"" Sep 4 04:26:17.306739 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1815052227.mount: Deactivated successfully. 
Sep 4 04:26:18.888787 containerd[1569]: time="2025-09-04T04:26:18.888668237Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.33.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 04:26:18.890627 containerd[1569]: time="2025-09-04T04:26:18.890548763Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.33.4: active requests=0, bytes read=30078664" Sep 4 04:26:18.892321 containerd[1569]: time="2025-09-04T04:26:18.892247448Z" level=info msg="ImageCreate event name:\"sha256:1f41885d0a91155d5a5e670b2862eed338c7f12b0e8a5bbc88b1ab4a2d505ae8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 04:26:18.896254 containerd[1569]: time="2025-09-04T04:26:18.896162038Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:0d441d0d347145b3f02f20cb313239cdae86067643d7f70803fab8bac2d28876\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 04:26:18.897597 containerd[1569]: time="2025-09-04T04:26:18.897534992Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.33.4\" with image id \"sha256:1f41885d0a91155d5a5e670b2862eed338c7f12b0e8a5bbc88b1ab4a2d505ae8\", repo tag \"registry.k8s.io/kube-apiserver:v1.33.4\", repo digest \"registry.k8s.io/kube-apiserver@sha256:0d441d0d347145b3f02f20cb313239cdae86067643d7f70803fab8bac2d28876\", size \"30075464\" in 2.330308057s" Sep 4 04:26:18.897655 containerd[1569]: time="2025-09-04T04:26:18.897599493Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.4\" returns image reference \"sha256:1f41885d0a91155d5a5e670b2862eed338c7f12b0e8a5bbc88b1ab4a2d505ae8\"" Sep 4 04:26:18.899104 containerd[1569]: time="2025-09-04T04:26:18.899073817Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.4\"" Sep 4 04:26:20.916814 containerd[1569]: time="2025-09-04T04:26:20.916732988Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.33.4\" 
labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 04:26:20.917712 containerd[1569]: time="2025-09-04T04:26:20.917664003Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.33.4: active requests=0, bytes read=26018066" Sep 4 04:26:20.918792 containerd[1569]: time="2025-09-04T04:26:20.918746794Z" level=info msg="ImageCreate event name:\"sha256:358ab71c1a1ea4846ad0b3dff0d9db6b124236b64bc8a6b79dc874f65dc0d492\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 04:26:20.921814 containerd[1569]: time="2025-09-04T04:26:20.921777797Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:bd22c2af2f30a8f818568b4d5fe131098fdd38267e9e07872cfc33e8f5876bc3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 04:26:20.922697 containerd[1569]: time="2025-09-04T04:26:20.922645754Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.33.4\" with image id \"sha256:358ab71c1a1ea4846ad0b3dff0d9db6b124236b64bc8a6b79dc874f65dc0d492\", repo tag \"registry.k8s.io/kube-controller-manager:v1.33.4\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:bd22c2af2f30a8f818568b4d5fe131098fdd38267e9e07872cfc33e8f5876bc3\", size \"27646961\" in 2.02353593s" Sep 4 04:26:20.922697 containerd[1569]: time="2025-09-04T04:26:20.922677794Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.4\" returns image reference \"sha256:358ab71c1a1ea4846ad0b3dff0d9db6b124236b64bc8a6b79dc874f65dc0d492\"" Sep 4 04:26:20.923647 containerd[1569]: time="2025-09-04T04:26:20.923608329Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.4\"" Sep 4 04:26:22.699685 containerd[1569]: time="2025-09-04T04:26:22.699584283Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.33.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 04:26:22.701115 containerd[1569]: time="2025-09-04T04:26:22.701045253Z" level=info 
msg="stop pulling image registry.k8s.io/kube-scheduler:v1.33.4: active requests=0, bytes read=20153911" Sep 4 04:26:22.702558 containerd[1569]: time="2025-09-04T04:26:22.702524857Z" level=info msg="ImageCreate event name:\"sha256:ab4ad8a84c3c69c18494ef32fa087b32f7c44d71e6acba463d2c7dda798c3d66\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 04:26:22.706683 containerd[1569]: time="2025-09-04T04:26:22.706637228Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:71533e5a960e2955a54164905e92dac516ec874a23e0bf31304db82650101a4a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 04:26:22.707682 containerd[1569]: time="2025-09-04T04:26:22.707624389Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.33.4\" with image id \"sha256:ab4ad8a84c3c69c18494ef32fa087b32f7c44d71e6acba463d2c7dda798c3d66\", repo tag \"registry.k8s.io/kube-scheduler:v1.33.4\", repo digest \"registry.k8s.io/kube-scheduler@sha256:71533e5a960e2955a54164905e92dac516ec874a23e0bf31304db82650101a4a\", size \"21782824\" in 1.783976446s" Sep 4 04:26:22.707748 containerd[1569]: time="2025-09-04T04:26:22.707684411Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.4\" returns image reference \"sha256:ab4ad8a84c3c69c18494ef32fa087b32f7c44d71e6acba463d2c7dda798c3d66\"" Sep 4 04:26:22.708441 containerd[1569]: time="2025-09-04T04:26:22.708388301Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.4\"" Sep 4 04:26:22.969576 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Sep 4 04:26:22.971278 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 4 04:26:23.214279 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Sep 4 04:26:23.228264 (kubelet)[2124]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Sep 4 04:26:23.517210 kubelet[2124]: E0904 04:26:23.517061 2124 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Sep 4 04:26:23.521774 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Sep 4 04:26:23.522011 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Sep 4 04:26:23.522396 systemd[1]: kubelet.service: Consumed 266ms CPU time, 110.9M memory peak.
Sep 4 04:26:26.820486 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2944765581.mount: Deactivated successfully.
Sep 4 04:26:27.815554 containerd[1569]: time="2025-09-04T04:26:27.815444390Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.33.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 04:26:27.820081 containerd[1569]: time="2025-09-04T04:26:27.819988079Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.33.4: active requests=0, bytes read=31899626"
Sep 4 04:26:27.831879 containerd[1569]: time="2025-09-04T04:26:27.831748039Z" level=info msg="ImageCreate event name:\"sha256:1b2ea5e018dbbbd2efb8e5c540a6d3c463d77f250d3904429402ee057f09c64e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 04:26:27.842601 containerd[1569]: time="2025-09-04T04:26:27.842491774Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:bb04e9247da3aaeb96406b4d530a79fc865695b6807353dd1a28871df0d7f837\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 04:26:27.843288 containerd[1569]: time="2025-09-04T04:26:27.843214379Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.33.4\" with image id \"sha256:1b2ea5e018dbbbd2efb8e5c540a6d3c463d77f250d3904429402ee057f09c64e\", repo tag \"registry.k8s.io/kube-proxy:v1.33.4\", repo digest \"registry.k8s.io/kube-proxy@sha256:bb04e9247da3aaeb96406b4d530a79fc865695b6807353dd1a28871df0d7f837\", size \"31898645\" in 5.13478473s"
Sep 4 04:26:27.843288 containerd[1569]: time="2025-09-04T04:26:27.843284510Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.4\" returns image reference \"sha256:1b2ea5e018dbbbd2efb8e5c540a6d3c463d77f250d3904429402ee057f09c64e\""
Sep 4 04:26:27.844123 containerd[1569]: time="2025-09-04T04:26:27.844073549Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\""
Sep 4 04:26:28.627393 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount515165676.mount: Deactivated successfully.
Sep 4 04:26:29.531363 containerd[1569]: time="2025-09-04T04:26:29.531259312Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 04:26:29.532483 containerd[1569]: time="2025-09-04T04:26:29.532428945Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.0: active requests=0, bytes read=20942238"
Sep 4 04:26:29.534155 containerd[1569]: time="2025-09-04T04:26:29.534065444Z" level=info msg="ImageCreate event name:\"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 04:26:29.537329 containerd[1569]: time="2025-09-04T04:26:29.537252309Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 04:26:29.538606 containerd[1569]: time="2025-09-04T04:26:29.538541717Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.0\" with image id \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.0\", repo digest \"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\", size \"20939036\" in 1.694423804s"
Sep 4 04:26:29.538606 containerd[1569]: time="2025-09-04T04:26:29.538590719Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\" returns image reference \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\""
Sep 4 04:26:29.539310 containerd[1569]: time="2025-09-04T04:26:29.539242351Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\""
Sep 4 04:26:30.538285 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount9459681.mount: Deactivated successfully.
Sep 4 04:26:30.547054 containerd[1569]: time="2025-09-04T04:26:30.546847670Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Sep 4 04:26:30.547878 containerd[1569]: time="2025-09-04T04:26:30.547784066Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138"
Sep 4 04:26:30.549516 containerd[1569]: time="2025-09-04T04:26:30.549434781Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Sep 4 04:26:30.552064 containerd[1569]: time="2025-09-04T04:26:30.551973872Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Sep 4 04:26:30.552885 containerd[1569]: time="2025-09-04T04:26:30.552807325Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 1.013522956s"
Sep 4 04:26:30.552960 containerd[1569]: time="2025-09-04T04:26:30.552893446Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\""
Sep 4 04:26:30.553597 containerd[1569]: time="2025-09-04T04:26:30.553531843Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\""
Sep 4 04:26:31.407354 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount169177484.mount: Deactivated successfully.
Sep 4 04:26:33.719639 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3.
Sep 4 04:26:33.721891 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 4 04:26:34.772403 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 4 04:26:34.790988 (kubelet)[2256]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Sep 4 04:26:34.888870 kubelet[2256]: E0904 04:26:34.888626 2256 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Sep 4 04:26:34.893757 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Sep 4 04:26:34.894031 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Sep 4 04:26:34.894523 systemd[1]: kubelet.service: Consumed 384ms CPU time, 108.4M memory peak.
Sep 4 04:26:35.474327 containerd[1569]: time="2025-09-04T04:26:35.474225685Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.21-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 04:26:35.475469 containerd[1569]: time="2025-09-04T04:26:35.475431374Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.21-0: active requests=0, bytes read=58377871"
Sep 4 04:26:35.477003 containerd[1569]: time="2025-09-04T04:26:35.476942600Z" level=info msg="ImageCreate event name:\"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 04:26:35.480465 containerd[1569]: time="2025-09-04T04:26:35.480423238Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 04:26:35.481459 containerd[1569]: time="2025-09-04T04:26:35.481428393Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.21-0\" with image id \"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\", repo tag \"registry.k8s.io/etcd:3.5.21-0\", repo digest \"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\", size \"58938593\" in 4.927857958s"
Sep 4 04:26:35.481459 containerd[1569]: time="2025-09-04T04:26:35.481459474Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\" returns image reference \"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\""
Sep 4 04:26:39.109513 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 4 04:26:39.109682 systemd[1]: kubelet.service: Consumed 384ms CPU time, 108.4M memory peak.
Sep 4 04:26:39.111974 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 4 04:26:39.140643 systemd[1]: Reload requested from client PID 2299 ('systemctl') (unit session-9.scope)...
Sep 4 04:26:39.140661 systemd[1]: Reloading...
Sep 4 04:26:39.238944 zram_generator::config[2344]: No configuration found.
Sep 4 04:26:39.762520 systemd[1]: Reloading finished in 621 ms.
Sep 4 04:26:39.826773 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM
Sep 4 04:26:39.826935 systemd[1]: kubelet.service: Failed with result 'signal'.
Sep 4 04:26:39.827355 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 4 04:26:39.827413 systemd[1]: kubelet.service: Consumed 164ms CPU time, 98.2M memory peak.
Sep 4 04:26:39.829483 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 4 04:26:40.063796 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 4 04:26:40.085495 (kubelet)[2390]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Sep 4 04:26:40.132414 kubelet[2390]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Sep 4 04:26:40.132414 kubelet[2390]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
Sep 4 04:26:40.132414 kubelet[2390]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Sep 4 04:26:40.132414 kubelet[2390]: I0904 04:26:40.131838 2390 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Sep 4 04:26:40.449379 kubelet[2390]: I0904 04:26:40.449254 2390 server.go:530] "Kubelet version" kubeletVersion="v1.33.0"
Sep 4 04:26:40.449379 kubelet[2390]: I0904 04:26:40.449284 2390 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Sep 4 04:26:40.449575 kubelet[2390]: I0904 04:26:40.449553 2390 server.go:956] "Client rotation is on, will bootstrap in background"
Sep 4 04:26:40.521129 kubelet[2390]: I0904 04:26:40.521073 2390 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Sep 4 04:26:40.522715 kubelet[2390]: E0904 04:26:40.522678 2390 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.124:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.124:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError"
Sep 4 04:26:41.123066 kubelet[2390]: I0904 04:26:41.123027 2390 server.go:1446] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd"
Sep 4 04:26:41.129257 kubelet[2390]: I0904 04:26:41.129234 2390 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Sep 4 04:26:41.129541 kubelet[2390]: I0904 04:26:41.129494 2390 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Sep 4 04:26:41.129738 kubelet[2390]: I0904 04:26:41.129528 2390 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Sep 4 04:26:41.129886 kubelet[2390]: I0904 04:26:41.129746 2390 topology_manager.go:138] "Creating topology manager with none policy"
Sep 4 04:26:41.129886 kubelet[2390]: I0904 04:26:41.129757 2390 container_manager_linux.go:303] "Creating device plugin manager"
Sep 4 04:26:41.130830 kubelet[2390]: I0904 04:26:41.130798 2390 state_mem.go:36] "Initialized new in-memory state store"
Sep 4 04:26:41.147509 kubelet[2390]: I0904 04:26:41.147427 2390 kubelet.go:480] "Attempting to sync node with API server"
Sep 4 04:26:41.147509 kubelet[2390]: I0904 04:26:41.147462 2390 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests"
Sep 4 04:26:41.147509 kubelet[2390]: I0904 04:26:41.147497 2390 kubelet.go:386] "Adding apiserver pod source"
Sep 4 04:26:41.147509 kubelet[2390]: I0904 04:26:41.147521 2390 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Sep 4 04:26:41.156610 kubelet[2390]: I0904 04:26:41.155081 2390 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v2.0.5" apiVersion="v1"
Sep 4 04:26:41.156610 kubelet[2390]: E0904 04:26:41.156158 2390 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.124:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.124:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
Sep 4 04:26:41.156610 kubelet[2390]: E0904 04:26:41.156183 2390 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.124:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.124:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
Sep 4 04:26:41.157177 kubelet[2390]: I0904 04:26:41.157138 2390 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled"
Sep 4 04:26:41.157846 kubelet[2390]: W0904 04:26:41.157822 2390 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Sep 4 04:26:41.160772 kubelet[2390]: I0904 04:26:41.160735 2390 watchdog_linux.go:99] "Systemd watchdog is not enabled"
Sep 4 04:26:41.160824 kubelet[2390]: I0904 04:26:41.160793 2390 server.go:1289] "Started kubelet"
Sep 4 04:26:41.160922 kubelet[2390]: I0904 04:26:41.160889 2390 server.go:180] "Starting to listen" address="0.0.0.0" port=10250
Sep 4 04:26:41.164893 kubelet[2390]: I0904 04:26:41.163307 2390 server.go:317] "Adding debug handlers to kubelet server"
Sep 4 04:26:41.164893 kubelet[2390]: I0904 04:26:41.164319 2390 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Sep 4 04:26:41.168250 kubelet[2390]: I0904 04:26:41.168210 2390 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Sep 4 04:26:41.170815 kubelet[2390]: E0904 04:26:41.170559 2390 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Sep 4 04:26:41.170815 kubelet[2390]: I0904 04:26:41.170600 2390 volume_manager.go:297] "Starting Kubelet Volume Manager"
Sep 4 04:26:41.171005 kubelet[2390]: I0904 04:26:41.170975 2390 desired_state_of_world_populator.go:150] "Desired state populator starts to run"
Sep 4 04:26:41.171062 kubelet[2390]: I0904 04:26:41.171046 2390 reconciler.go:26] "Reconciler: start to sync state"
Sep 4 04:26:41.171668 kubelet[2390]: E0904 04:26:41.171479 2390 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.124:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.124:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
Sep 4 04:26:41.172057 kubelet[2390]: I0904 04:26:41.171977 2390 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Sep 4 04:26:41.172423 kubelet[2390]: I0904 04:26:41.172395 2390 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Sep 4 04:26:41.172423 kubelet[2390]: E0904 04:26:41.172442 2390 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Sep 4 04:26:41.173053 kubelet[2390]: E0904 04:26:41.173008 2390 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.124:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.124:6443: connect: connection refused" interval="200ms"
Sep 4 04:26:41.173159 kubelet[2390]: I0904 04:26:41.173133 2390 factory.go:223] Registration of the containerd container factory successfully
Sep 4 04:26:41.173159 kubelet[2390]: I0904 04:26:41.173153 2390 factory.go:223] Registration of the systemd container factory successfully
Sep 4 04:26:41.173264 kubelet[2390]: I0904 04:26:41.173238 2390 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Sep 4 04:26:41.183891 kubelet[2390]: E0904 04:26:41.181901 2390 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.124:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.124:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.1861f9cff1c1ba42 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-09-04 04:26:41.160755778 +0000 UTC m=+1.069765664,LastTimestamp:2025-09-04 04:26:41.160755778 +0000 UTC m=+1.069765664,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}"
Sep 4 04:26:41.189107 kubelet[2390]: I0904 04:26:41.189084 2390 cpu_manager.go:221] "Starting CPU manager" policy="none"
Sep 4 04:26:41.189219 kubelet[2390]: I0904 04:26:41.189188 2390 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s"
Sep 4 04:26:41.189219 kubelet[2390]: I0904 04:26:41.189208 2390 state_mem.go:36] "Initialized new in-memory state store"
Sep 4 04:26:41.195017 kubelet[2390]: I0904 04:26:41.194976 2390 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4"
Sep 4 04:26:41.196592 kubelet[2390]: I0904 04:26:41.196548 2390 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6"
Sep 4 04:26:41.196673 kubelet[2390]: I0904 04:26:41.196602 2390 status_manager.go:230] "Starting to sync pod status with apiserver"
Sep 4 04:26:41.196719 kubelet[2390]: I0904 04:26:41.196661 2390 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
Sep 4 04:26:41.196719 kubelet[2390]: I0904 04:26:41.196686 2390 kubelet.go:2436] "Starting kubelet main sync loop"
Sep 4 04:26:41.197050 kubelet[2390]: E0904 04:26:41.196780 2390 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Sep 4 04:26:41.197589 kubelet[2390]: E0904 04:26:41.197548 2390 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.124:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.124:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass"
Sep 4 04:26:41.237676 kubelet[2390]: I0904 04:26:41.237578 2390 policy_none.go:49] "None policy: Start"
Sep 4 04:26:41.237676 kubelet[2390]: I0904 04:26:41.237638 2390 memory_manager.go:186] "Starting memorymanager" policy="None"
Sep 4 04:26:41.237676 kubelet[2390]: I0904 04:26:41.237659 2390 state_mem.go:35] "Initializing new in-memory state store"
Sep 4 04:26:41.271501 kubelet[2390]: E0904 04:26:41.271407 2390 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Sep 4 04:26:41.298018 kubelet[2390]: E0904 04:26:41.297943 2390 kubelet.go:2460] "Skipping pod synchronization" err="container runtime status check may not have completed yet"
Sep 4 04:26:41.319579 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice.
Sep 4 04:26:41.339337 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice.
Sep 4 04:26:41.342831 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice.
Sep 4 04:26:41.361040 kubelet[2390]: E0904 04:26:41.360993 2390 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint"
Sep 4 04:26:41.361360 kubelet[2390]: I0904 04:26:41.361328 2390 eviction_manager.go:189] "Eviction manager: starting control loop"
Sep 4 04:26:41.361360 kubelet[2390]: I0904 04:26:41.361350 2390 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Sep 4 04:26:41.362094 kubelet[2390]: I0904 04:26:41.362061 2390 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Sep 4 04:26:41.362681 kubelet[2390]: E0904 04:26:41.362650 2390 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime"
Sep 4 04:26:41.362741 kubelet[2390]: E0904 04:26:41.362726 2390 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found"
Sep 4 04:26:41.374159 kubelet[2390]: E0904 04:26:41.374035 2390 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.124:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.124:6443: connect: connection refused" interval="400ms"
Sep 4 04:26:41.463940 kubelet[2390]: I0904 04:26:41.463829 2390 kubelet_node_status.go:75] "Attempting to register node" node="localhost"
Sep 4 04:26:41.464430 kubelet[2390]: E0904 04:26:41.464375 2390 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.124:6443/api/v1/nodes\": dial tcp 10.0.0.124:6443: connect: connection refused" node="localhost"
Sep 4 04:26:41.514939 systemd[1]: Created slice kubepods-burstable-pod0cdc55172a3edb329dee6c421a43a316.slice - libcontainer container kubepods-burstable-pod0cdc55172a3edb329dee6c421a43a316.slice.
Sep 4 04:26:41.544486 kubelet[2390]: E0904 04:26:41.544388 2390 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Sep 4 04:26:41.548268 systemd[1]: Created slice kubepods-burstable-pod8de7187202bee21b84740a213836f615.slice - libcontainer container kubepods-burstable-pod8de7187202bee21b84740a213836f615.slice.
Sep 4 04:26:41.564809 kubelet[2390]: E0904 04:26:41.564743 2390 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Sep 4 04:26:41.568297 systemd[1]: Created slice kubepods-burstable-podd75e6f6978d9f275ea19380916c9cccd.slice - libcontainer container kubepods-burstable-podd75e6f6978d9f275ea19380916c9cccd.slice.
Sep 4 04:26:41.570813 kubelet[2390]: E0904 04:26:41.570760 2390 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Sep 4 04:26:41.571813 kubelet[2390]: I0904 04:26:41.571767 2390 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/0cdc55172a3edb329dee6c421a43a316-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"0cdc55172a3edb329dee6c421a43a316\") " pod="kube-system/kube-apiserver-localhost"
Sep 4 04:26:41.571813 kubelet[2390]: I0904 04:26:41.571806 2390 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/0cdc55172a3edb329dee6c421a43a316-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"0cdc55172a3edb329dee6c421a43a316\") " pod="kube-system/kube-apiserver-localhost"
Sep 4 04:26:41.571982 kubelet[2390]: I0904 04:26:41.571832 2390 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/8de7187202bee21b84740a213836f615-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"8de7187202bee21b84740a213836f615\") " pod="kube-system/kube-controller-manager-localhost"
Sep 4 04:26:41.571982 kubelet[2390]: I0904 04:26:41.571884 2390 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/8de7187202bee21b84740a213836f615-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"8de7187202bee21b84740a213836f615\") " pod="kube-system/kube-controller-manager-localhost"
Sep 4 04:26:41.571982 kubelet[2390]: I0904 04:26:41.571909 2390 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/8de7187202bee21b84740a213836f615-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"8de7187202bee21b84740a213836f615\") " pod="kube-system/kube-controller-manager-localhost"
Sep 4 04:26:41.572097 kubelet[2390]: I0904 04:26:41.571980 2390 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/8de7187202bee21b84740a213836f615-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"8de7187202bee21b84740a213836f615\") " pod="kube-system/kube-controller-manager-localhost"
Sep 4 04:26:41.572097 kubelet[2390]: I0904 04:26:41.572029 2390 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/8de7187202bee21b84740a213836f615-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"8de7187202bee21b84740a213836f615\") " pod="kube-system/kube-controller-manager-localhost"
Sep 4 04:26:41.572097 kubelet[2390]: I0904 04:26:41.572072 2390 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/d75e6f6978d9f275ea19380916c9cccd-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"d75e6f6978d9f275ea19380916c9cccd\") " pod="kube-system/kube-scheduler-localhost"
Sep 4 04:26:41.572097 kubelet[2390]: I0904 04:26:41.572093 2390 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/0cdc55172a3edb329dee6c421a43a316-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"0cdc55172a3edb329dee6c421a43a316\") " pod="kube-system/kube-apiserver-localhost"
Sep 4 04:26:41.666423 kubelet[2390]: I0904 04:26:41.666279 2390 kubelet_node_status.go:75] "Attempting to register node" node="localhost"
Sep 4 04:26:41.666994 kubelet[2390]: E0904 04:26:41.666945 2390 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.124:6443/api/v1/nodes\": dial tcp 10.0.0.124:6443: connect: connection refused" node="localhost"
Sep 4 04:26:41.775335 kubelet[2390]: E0904 04:26:41.775285 2390 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.124:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.124:6443: connect: connection refused" interval="800ms"
Sep 4 04:26:41.845769 kubelet[2390]: E0904 04:26:41.845690 2390 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 4 04:26:41.846595 containerd[1569]: time="2025-09-04T04:26:41.846523689Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:0cdc55172a3edb329dee6c421a43a316,Namespace:kube-system,Attempt:0,}"
Sep 4 04:26:41.865931 kubelet[2390]: E0904 04:26:41.865838 2390 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 4 04:26:41.866653 containerd[1569]: time="2025-09-04T04:26:41.866589979Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:8de7187202bee21b84740a213836f615,Namespace:kube-system,Attempt:0,}"
Sep 4 04:26:41.872305 containerd[1569]: time="2025-09-04T04:26:41.871562501Z" level=info msg="connecting to shim 1f4ff44152373807207e7f44816c8fe034ea982b3b344767ca68ddfa11ac3665" address="unix:///run/containerd/s/740aabeb94fb1f9f059c65fcfa45bc0cd29eaa0ce1597371ac65fabfd13b8e4f" namespace=k8s.io protocol=ttrpc version=3
Sep 4 04:26:41.872429 kubelet[2390]: E0904 04:26:41.871747 2390 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 4 04:26:41.872526 containerd[1569]: time="2025-09-04T04:26:41.872495095Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:d75e6f6978d9f275ea19380916c9cccd,Namespace:kube-system,Attempt:0,}"
Sep 4 04:26:41.902145 systemd[1]: Started cri-containerd-1f4ff44152373807207e7f44816c8fe034ea982b3b344767ca68ddfa11ac3665.scope - libcontainer container 1f4ff44152373807207e7f44816c8fe034ea982b3b344767ca68ddfa11ac3665.
Sep 4 04:26:41.908457 containerd[1569]: time="2025-09-04T04:26:41.908396546Z" level=info msg="connecting to shim 0a916f265377a6f119b4342b3d5ec9744483a6c327db016e910742855e071cee" address="unix:///run/containerd/s/ac11664172d547acffabfa28dc3544dda740d2cb44a07a2135f3907a03a77145" namespace=k8s.io protocol=ttrpc version=3
Sep 4 04:26:41.929991 containerd[1569]: time="2025-09-04T04:26:41.929770243Z" level=info msg="connecting to shim 4d1aec0950d545926269c77033b484c90ed63ca838b4668b18981ea55e97caa8" address="unix:///run/containerd/s/b818904c7763da821aa2df2e6b1712d299ebf84035c8335d70cb54befe3989d3" namespace=k8s.io protocol=ttrpc version=3
Sep 4 04:26:41.943074 systemd[1]: Started cri-containerd-0a916f265377a6f119b4342b3d5ec9744483a6c327db016e910742855e071cee.scope - libcontainer container 0a916f265377a6f119b4342b3d5ec9744483a6c327db016e910742855e071cee.
Sep 4 04:26:41.963101 systemd[1]: Started cri-containerd-4d1aec0950d545926269c77033b484c90ed63ca838b4668b18981ea55e97caa8.scope - libcontainer container 4d1aec0950d545926269c77033b484c90ed63ca838b4668b18981ea55e97caa8.
Sep 4 04:26:41.972166 containerd[1569]: time="2025-09-04T04:26:41.972116957Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:0cdc55172a3edb329dee6c421a43a316,Namespace:kube-system,Attempt:0,} returns sandbox id \"1f4ff44152373807207e7f44816c8fe034ea982b3b344767ca68ddfa11ac3665\"" Sep 4 04:26:41.974567 kubelet[2390]: E0904 04:26:41.974466 2390 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 04:26:41.983459 containerd[1569]: time="2025-09-04T04:26:41.983415756Z" level=info msg="CreateContainer within sandbox \"1f4ff44152373807207e7f44816c8fe034ea982b3b344767ca68ddfa11ac3665\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Sep 4 04:26:41.998808 containerd[1569]: time="2025-09-04T04:26:41.998757487Z" level=info msg="Container bdf7116caa71855569ad6b308c600243d4deeac042e52e31a40dd9e4a82d5433: CDI devices from CRI Config.CDIDevices: []" Sep 4 04:26:42.008108 containerd[1569]: time="2025-09-04T04:26:42.008063732Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:8de7187202bee21b84740a213836f615,Namespace:kube-system,Attempt:0,} returns sandbox id \"0a916f265377a6f119b4342b3d5ec9744483a6c327db016e910742855e071cee\"" Sep 4 04:26:42.009144 kubelet[2390]: E0904 04:26:42.009101 2390 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 04:26:42.011596 containerd[1569]: time="2025-09-04T04:26:42.011554007Z" level=info msg="CreateContainer within sandbox \"1f4ff44152373807207e7f44816c8fe034ea982b3b344767ca68ddfa11ac3665\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"bdf7116caa71855569ad6b308c600243d4deeac042e52e31a40dd9e4a82d5433\"" Sep 4 04:26:42.014264 containerd[1569]: 
time="2025-09-04T04:26:42.014212140Z" level=info msg="CreateContainer within sandbox \"0a916f265377a6f119b4342b3d5ec9744483a6c327db016e910742855e071cee\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Sep 4 04:26:42.016112 containerd[1569]: time="2025-09-04T04:26:42.016059722Z" level=info msg="StartContainer for \"bdf7116caa71855569ad6b308c600243d4deeac042e52e31a40dd9e4a82d5433\"" Sep 4 04:26:42.017813 containerd[1569]: time="2025-09-04T04:26:42.017393228Z" level=info msg="connecting to shim bdf7116caa71855569ad6b308c600243d4deeac042e52e31a40dd9e4a82d5433" address="unix:///run/containerd/s/740aabeb94fb1f9f059c65fcfa45bc0cd29eaa0ce1597371ac65fabfd13b8e4f" protocol=ttrpc version=3 Sep 4 04:26:42.021529 containerd[1569]: time="2025-09-04T04:26:42.021467342Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:d75e6f6978d9f275ea19380916c9cccd,Namespace:kube-system,Attempt:0,} returns sandbox id \"4d1aec0950d545926269c77033b484c90ed63ca838b4668b18981ea55e97caa8\"" Sep 4 04:26:42.024595 kubelet[2390]: E0904 04:26:42.023993 2390 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 04:26:42.025580 containerd[1569]: time="2025-09-04T04:26:42.025541076Z" level=info msg="Container 2d0734280f06d639394a7c7275a8d4060d7e3777ae8499a33f07c38d3be93680: CDI devices from CRI Config.CDIDevices: []" Sep 4 04:26:42.030339 containerd[1569]: time="2025-09-04T04:26:42.030297569Z" level=info msg="CreateContainer within sandbox \"4d1aec0950d545926269c77033b484c90ed63ca838b4668b18981ea55e97caa8\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Sep 4 04:26:42.037559 containerd[1569]: time="2025-09-04T04:26:42.037490191Z" level=info msg="CreateContainer within sandbox \"0a916f265377a6f119b4342b3d5ec9744483a6c327db016e910742855e071cee\" for 
&ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"2d0734280f06d639394a7c7275a8d4060d7e3777ae8499a33f07c38d3be93680\"" Sep 4 04:26:42.038678 containerd[1569]: time="2025-09-04T04:26:42.038607545Z" level=info msg="StartContainer for \"2d0734280f06d639394a7c7275a8d4060d7e3777ae8499a33f07c38d3be93680\"" Sep 4 04:26:42.040586 containerd[1569]: time="2025-09-04T04:26:42.040549457Z" level=info msg="connecting to shim 2d0734280f06d639394a7c7275a8d4060d7e3777ae8499a33f07c38d3be93680" address="unix:///run/containerd/s/ac11664172d547acffabfa28dc3544dda740d2cb44a07a2135f3907a03a77145" protocol=ttrpc version=3 Sep 4 04:26:42.043001 containerd[1569]: time="2025-09-04T04:26:42.042948238Z" level=info msg="Container 4e6d113f4c472ac789ffe12b68b9ad69d7b1518c5b4550611d4d5b5f421a920c: CDI devices from CRI Config.CDIDevices: []" Sep 4 04:26:42.045096 systemd[1]: Started cri-containerd-bdf7116caa71855569ad6b308c600243d4deeac042e52e31a40dd9e4a82d5433.scope - libcontainer container bdf7116caa71855569ad6b308c600243d4deeac042e52e31a40dd9e4a82d5433. 
Sep 4 04:26:42.053226 containerd[1569]: time="2025-09-04T04:26:42.053149550Z" level=info msg="CreateContainer within sandbox \"4d1aec0950d545926269c77033b484c90ed63ca838b4668b18981ea55e97caa8\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"4e6d113f4c472ac789ffe12b68b9ad69d7b1518c5b4550611d4d5b5f421a920c\"" Sep 4 04:26:42.054013 containerd[1569]: time="2025-09-04T04:26:42.053783425Z" level=info msg="StartContainer for \"4e6d113f4c472ac789ffe12b68b9ad69d7b1518c5b4550611d4d5b5f421a920c\"" Sep 4 04:26:42.055396 containerd[1569]: time="2025-09-04T04:26:42.055366515Z" level=info msg="connecting to shim 4e6d113f4c472ac789ffe12b68b9ad69d7b1518c5b4550611d4d5b5f421a920c" address="unix:///run/containerd/s/b818904c7763da821aa2df2e6b1712d299ebf84035c8335d70cb54befe3989d3" protocol=ttrpc version=3 Sep 4 04:26:42.067053 systemd[1]: Started cri-containerd-2d0734280f06d639394a7c7275a8d4060d7e3777ae8499a33f07c38d3be93680.scope - libcontainer container 2d0734280f06d639394a7c7275a8d4060d7e3777ae8499a33f07c38d3be93680. Sep 4 04:26:42.071243 kubelet[2390]: I0904 04:26:42.069600 2390 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Sep 4 04:26:42.071243 kubelet[2390]: E0904 04:26:42.070206 2390 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.124:6443/api/v1/nodes\": dial tcp 10.0.0.124:6443: connect: connection refused" node="localhost" Sep 4 04:26:42.090105 systemd[1]: Started cri-containerd-4e6d113f4c472ac789ffe12b68b9ad69d7b1518c5b4550611d4d5b5f421a920c.scope - libcontainer container 4e6d113f4c472ac789ffe12b68b9ad69d7b1518c5b4550611d4d5b5f421a920c. 
Sep 4 04:26:42.129639 containerd[1569]: time="2025-09-04T04:26:42.129590463Z" level=info msg="StartContainer for \"bdf7116caa71855569ad6b308c600243d4deeac042e52e31a40dd9e4a82d5433\" returns successfully" Sep 4 04:26:42.147225 containerd[1569]: time="2025-09-04T04:26:42.147080732Z" level=info msg="StartContainer for \"2d0734280f06d639394a7c7275a8d4060d7e3777ae8499a33f07c38d3be93680\" returns successfully" Sep 4 04:26:42.182903 containerd[1569]: time="2025-09-04T04:26:42.181983030Z" level=info msg="StartContainer for \"4e6d113f4c472ac789ffe12b68b9ad69d7b1518c5b4550611d4d5b5f421a920c\" returns successfully" Sep 4 04:26:42.209825 kubelet[2390]: E0904 04:26:42.209782 2390 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 4 04:26:42.210705 kubelet[2390]: E0904 04:26:42.210665 2390 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 04:26:42.218894 kubelet[2390]: E0904 04:26:42.217528 2390 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 4 04:26:42.218894 kubelet[2390]: E0904 04:26:42.217798 2390 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 04:26:42.220493 kubelet[2390]: E0904 04:26:42.220452 2390 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 4 04:26:42.220820 kubelet[2390]: E0904 04:26:42.220783 2390 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 04:26:42.873042 kubelet[2390]: 
I0904 04:26:42.873005 2390 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Sep 4 04:26:43.225185 kubelet[2390]: E0904 04:26:43.225123 2390 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 4 04:26:43.225712 kubelet[2390]: E0904 04:26:43.225265 2390 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 04:26:43.231840 kubelet[2390]: E0904 04:26:43.231798 2390 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 4 04:26:43.232033 kubelet[2390]: E0904 04:26:43.231998 2390 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 04:26:44.032984 kubelet[2390]: E0904 04:26:44.032883 2390 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Sep 4 04:26:44.133839 kubelet[2390]: I0904 04:26:44.132212 2390 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Sep 4 04:26:44.133839 kubelet[2390]: E0904 04:26:44.133675 2390 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not found" Sep 4 04:26:44.155633 kubelet[2390]: I0904 04:26:44.155586 2390 apiserver.go:52] "Watching apiserver" Sep 4 04:26:44.171767 kubelet[2390]: I0904 04:26:44.171692 2390 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Sep 4 04:26:44.173209 kubelet[2390]: I0904 04:26:44.172780 2390 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Sep 4 04:26:44.179952 
kubelet[2390]: E0904 04:26:44.179902 2390 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost" Sep 4 04:26:44.179952 kubelet[2390]: I0904 04:26:44.179942 2390 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Sep 4 04:26:44.181506 kubelet[2390]: E0904 04:26:44.181472 2390 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-localhost" Sep 4 04:26:44.181506 kubelet[2390]: I0904 04:26:44.181498 2390 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Sep 4 04:26:44.182900 kubelet[2390]: E0904 04:26:44.182754 2390 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost" Sep 4 04:26:44.223041 kubelet[2390]: I0904 04:26:44.222997 2390 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Sep 4 04:26:44.225126 kubelet[2390]: E0904 04:26:44.225092 2390 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost" Sep 4 04:26:44.225299 kubelet[2390]: E0904 04:26:44.225281 2390 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 04:26:44.360160 update_engine[1549]: I20250904 04:26:44.360002 1549 update_attempter.cc:509] Updating boot flags... 
Sep 4 04:26:46.015230 systemd[1]: Reload requested from client PID 2693 ('systemctl') (unit session-9.scope)... Sep 4 04:26:46.015249 systemd[1]: Reloading... Sep 4 04:26:46.090916 zram_generator::config[2736]: No configuration found. Sep 4 04:26:46.345638 systemd[1]: Reloading finished in 330 ms. Sep 4 04:26:46.376714 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Sep 4 04:26:46.397231 systemd[1]: kubelet.service: Deactivated successfully. Sep 4 04:26:46.397540 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Sep 4 04:26:46.397597 systemd[1]: kubelet.service: Consumed 993ms CPU time, 135.1M memory peak. Sep 4 04:26:46.399524 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 4 04:26:46.624476 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 4 04:26:46.645352 (kubelet)[2781]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Sep 4 04:26:46.688900 kubelet[2781]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 4 04:26:46.688900 kubelet[2781]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Sep 4 04:26:46.688900 kubelet[2781]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Sep 4 04:26:46.689426 kubelet[2781]: I0904 04:26:46.688973 2781 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Sep 4 04:26:46.697338 kubelet[2781]: I0904 04:26:46.697295 2781 server.go:530] "Kubelet version" kubeletVersion="v1.33.0" Sep 4 04:26:46.697338 kubelet[2781]: I0904 04:26:46.697318 2781 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Sep 4 04:26:46.697551 kubelet[2781]: I0904 04:26:46.697518 2781 server.go:956] "Client rotation is on, will bootstrap in background" Sep 4 04:26:46.698747 kubelet[2781]: I0904 04:26:46.698707 2781 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Sep 4 04:26:46.701169 kubelet[2781]: I0904 04:26:46.701054 2781 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Sep 4 04:26:46.707949 kubelet[2781]: I0904 04:26:46.707361 2781 server.go:1446] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Sep 4 04:26:46.713046 kubelet[2781]: I0904 04:26:46.713024 2781 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Sep 4 04:26:46.713296 kubelet[2781]: I0904 04:26:46.713259 2781 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Sep 4 04:26:46.713462 kubelet[2781]: I0904 04:26:46.713288 2781 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Sep 4 04:26:46.713545 kubelet[2781]: I0904 04:26:46.713467 2781 topology_manager.go:138] "Creating topology manager with none policy" Sep 4 04:26:46.713545 
kubelet[2781]: I0904 04:26:46.713477 2781 container_manager_linux.go:303] "Creating device plugin manager" Sep 4 04:26:46.713545 kubelet[2781]: I0904 04:26:46.713528 2781 state_mem.go:36] "Initialized new in-memory state store" Sep 4 04:26:46.713724 kubelet[2781]: I0904 04:26:46.713686 2781 kubelet.go:480] "Attempting to sync node with API server" Sep 4 04:26:46.713724 kubelet[2781]: I0904 04:26:46.713704 2781 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Sep 4 04:26:46.713724 kubelet[2781]: I0904 04:26:46.713724 2781 kubelet.go:386] "Adding apiserver pod source" Sep 4 04:26:46.713724 kubelet[2781]: I0904 04:26:46.713735 2781 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Sep 4 04:26:46.716884 kubelet[2781]: I0904 04:26:46.715358 2781 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v2.0.5" apiVersion="v1" Sep 4 04:26:46.716884 kubelet[2781]: I0904 04:26:46.715903 2781 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Sep 4 04:26:46.719074 kubelet[2781]: I0904 04:26:46.719029 2781 watchdog_linux.go:99] "Systemd watchdog is not enabled" Sep 4 04:26:46.719074 kubelet[2781]: I0904 04:26:46.719071 2781 server.go:1289] "Started kubelet" Sep 4 04:26:46.720641 kubelet[2781]: I0904 04:26:46.720531 2781 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Sep 4 04:26:46.720801 kubelet[2781]: I0904 04:26:46.720663 2781 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Sep 4 04:26:46.721401 kubelet[2781]: I0904 04:26:46.721184 2781 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Sep 4 04:26:46.722238 kubelet[2781]: I0904 04:26:46.722138 2781 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Sep 4 04:26:46.726201 kubelet[2781]: I0904 
04:26:46.724684 2781 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Sep 4 04:26:46.726449 kubelet[2781]: I0904 04:26:46.726420 2781 server.go:317] "Adding debug handlers to kubelet server" Sep 4 04:26:46.728462 kubelet[2781]: E0904 04:26:46.728410 2781 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 4 04:26:46.728522 kubelet[2781]: I0904 04:26:46.728490 2781 volume_manager.go:297] "Starting Kubelet Volume Manager" Sep 4 04:26:46.729873 kubelet[2781]: I0904 04:26:46.729818 2781 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Sep 4 04:26:46.730062 kubelet[2781]: I0904 04:26:46.730043 2781 reconciler.go:26] "Reconciler: start to sync state" Sep 4 04:26:46.732554 kubelet[2781]: E0904 04:26:46.732377 2781 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Sep 4 04:26:46.733807 kubelet[2781]: I0904 04:26:46.733767 2781 factory.go:223] Registration of the systemd container factory successfully Sep 4 04:26:46.734368 kubelet[2781]: I0904 04:26:46.733930 2781 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Sep 4 04:26:46.735757 kubelet[2781]: I0904 04:26:46.735735 2781 factory.go:223] Registration of the containerd container factory successfully Sep 4 04:26:46.741057 kubelet[2781]: I0904 04:26:46.741025 2781 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" Sep 4 04:26:46.750290 kubelet[2781]: I0904 04:26:46.750270 2781 kubelet_network_linux.go:49] "Initialized iptables rules." 
protocol="IPv4" Sep 4 04:26:46.750290 kubelet[2781]: I0904 04:26:46.750290 2781 status_manager.go:230] "Starting to sync pod status with apiserver" Sep 4 04:26:46.750382 kubelet[2781]: I0904 04:26:46.750309 2781 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Sep 4 04:26:46.750382 kubelet[2781]: I0904 04:26:46.750316 2781 kubelet.go:2436] "Starting kubelet main sync loop" Sep 4 04:26:46.750470 kubelet[2781]: E0904 04:26:46.750377 2781 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Sep 4 04:26:46.776442 kubelet[2781]: I0904 04:26:46.776414 2781 cpu_manager.go:221] "Starting CPU manager" policy="none" Sep 4 04:26:46.776442 kubelet[2781]: I0904 04:26:46.776430 2781 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Sep 4 04:26:46.776442 kubelet[2781]: I0904 04:26:46.776449 2781 state_mem.go:36] "Initialized new in-memory state store" Sep 4 04:26:46.776624 kubelet[2781]: I0904 04:26:46.776571 2781 state_mem.go:88] "Updated default CPUSet" cpuSet="" Sep 4 04:26:46.776624 kubelet[2781]: I0904 04:26:46.776580 2781 state_mem.go:96] "Updated CPUSet assignments" assignments={} Sep 4 04:26:46.776624 kubelet[2781]: I0904 04:26:46.776604 2781 policy_none.go:49] "None policy: Start" Sep 4 04:26:46.776624 kubelet[2781]: I0904 04:26:46.776613 2781 memory_manager.go:186] "Starting memorymanager" policy="None" Sep 4 04:26:46.776624 kubelet[2781]: I0904 04:26:46.776623 2781 state_mem.go:35] "Initializing new in-memory state store" Sep 4 04:26:46.776742 kubelet[2781]: I0904 04:26:46.776733 2781 state_mem.go:75] "Updated machine memory state" Sep 4 04:26:46.780962 kubelet[2781]: E0904 04:26:46.780932 2781 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Sep 4 04:26:46.781206 kubelet[2781]: I0904 04:26:46.781182 
2781 eviction_manager.go:189] "Eviction manager: starting control loop" Sep 4 04:26:46.781251 kubelet[2781]: I0904 04:26:46.781203 2781 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Sep 4 04:26:46.781606 kubelet[2781]: I0904 04:26:46.781503 2781 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Sep 4 04:26:46.782977 kubelet[2781]: E0904 04:26:46.782953 2781 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Sep 4 04:26:46.851895 kubelet[2781]: I0904 04:26:46.851285 2781 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Sep 4 04:26:46.851895 kubelet[2781]: I0904 04:26:46.851666 2781 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Sep 4 04:26:46.851895 kubelet[2781]: I0904 04:26:46.851680 2781 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Sep 4 04:26:46.886308 kubelet[2781]: I0904 04:26:46.886106 2781 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Sep 4 04:26:46.899392 kubelet[2781]: I0904 04:26:46.899333 2781 kubelet_node_status.go:124] "Node was previously registered" node="localhost" Sep 4 04:26:46.899572 kubelet[2781]: I0904 04:26:46.899458 2781 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Sep 4 04:26:46.931568 kubelet[2781]: I0904 04:26:46.931483 2781 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/0cdc55172a3edb329dee6c421a43a316-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"0cdc55172a3edb329dee6c421a43a316\") " pod="kube-system/kube-apiserver-localhost" Sep 4 04:26:46.931568 kubelet[2781]: I0904 04:26:46.931541 2781 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/8de7187202bee21b84740a213836f615-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"8de7187202bee21b84740a213836f615\") " pod="kube-system/kube-controller-manager-localhost" Sep 4 04:26:46.931568 kubelet[2781]: I0904 04:26:46.931569 2781 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/d75e6f6978d9f275ea19380916c9cccd-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"d75e6f6978d9f275ea19380916c9cccd\") " pod="kube-system/kube-scheduler-localhost" Sep 4 04:26:46.931892 kubelet[2781]: I0904 04:26:46.931589 2781 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/0cdc55172a3edb329dee6c421a43a316-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"0cdc55172a3edb329dee6c421a43a316\") " pod="kube-system/kube-apiserver-localhost" Sep 4 04:26:46.931892 kubelet[2781]: I0904 04:26:46.931629 2781 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/8de7187202bee21b84740a213836f615-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"8de7187202bee21b84740a213836f615\") " pod="kube-system/kube-controller-manager-localhost" Sep 4 04:26:46.931892 kubelet[2781]: I0904 04:26:46.931690 2781 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/8de7187202bee21b84740a213836f615-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"8de7187202bee21b84740a213836f615\") " pod="kube-system/kube-controller-manager-localhost" Sep 4 04:26:46.931892 kubelet[2781]: I0904 04:26:46.931727 2781 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/8de7187202bee21b84740a213836f615-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"8de7187202bee21b84740a213836f615\") " pod="kube-system/kube-controller-manager-localhost" Sep 4 04:26:46.931892 kubelet[2781]: I0904 04:26:46.931746 2781 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/8de7187202bee21b84740a213836f615-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"8de7187202bee21b84740a213836f615\") " pod="kube-system/kube-controller-manager-localhost" Sep 4 04:26:46.932042 kubelet[2781]: I0904 04:26:46.931831 2781 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/0cdc55172a3edb329dee6c421a43a316-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"0cdc55172a3edb329dee6c421a43a316\") " pod="kube-system/kube-apiserver-localhost" Sep 4 04:26:47.020072 sudo[2822]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Sep 4 04:26:47.020508 sudo[2822]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0) Sep 4 04:26:47.195021 kubelet[2781]: E0904 04:26:47.194895 2781 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 04:26:47.195021 kubelet[2781]: E0904 04:26:47.195011 2781 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 04:26:47.195887 kubelet[2781]: E0904 04:26:47.195761 2781 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been 
omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 4 04:26:47.330958 sudo[2822]: pam_unix(sudo:session): session closed for user root
Sep 4 04:26:47.714684 kubelet[2781]: I0904 04:26:47.714561 2781 apiserver.go:52] "Watching apiserver"
Sep 4 04:26:47.730814 kubelet[2781]: I0904 04:26:47.730753 2781 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world"
Sep 4 04:26:47.764188 kubelet[2781]: E0904 04:26:47.764109 2781 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 4 04:26:47.764672 kubelet[2781]: I0904 04:26:47.764651 2781 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost"
Sep 4 04:26:47.765209 kubelet[2781]: E0904 04:26:47.765138 2781 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 4 04:26:47.771286 kubelet[2781]: E0904 04:26:47.771255 2781 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost"
Sep 4 04:26:47.771563 kubelet[2781]: E0904 04:26:47.771543 2781 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 4 04:26:47.796804 kubelet[2781]: I0904 04:26:47.796730 2781 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.796699102 podStartE2EDuration="1.796699102s" podCreationTimestamp="2025-09-04 04:26:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-04 04:26:47.787061079 +0000 UTC m=+1.136381359" watchObservedRunningTime="2025-09-04 04:26:47.796699102 +0000 UTC m=+1.146019392"
Sep 4 04:26:47.805426 kubelet[2781]: I0904 04:26:47.805357 2781 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=1.8053426510000001 podStartE2EDuration="1.805342651s" podCreationTimestamp="2025-09-04 04:26:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-04 04:26:47.797094089 +0000 UTC m=+1.146414380" watchObservedRunningTime="2025-09-04 04:26:47.805342651 +0000 UTC m=+1.154662941"
Sep 4 04:26:47.814007 kubelet[2781]: I0904 04:26:47.813952 2781 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.813943319 podStartE2EDuration="1.813943319s" podCreationTimestamp="2025-09-04 04:26:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-04 04:26:47.805681613 +0000 UTC m=+1.155001903" watchObservedRunningTime="2025-09-04 04:26:47.813943319 +0000 UTC m=+1.163263609"
Sep 4 04:26:48.781435 kubelet[2781]: E0904 04:26:48.781213 2781 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 4 04:26:48.781435 kubelet[2781]: E0904 04:26:48.781221 2781 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 4 04:26:49.415432 sudo[1798]: pam_unix(sudo:session): session closed for user root
Sep 4 04:26:49.418112 sshd[1797]: Connection closed by 10.0.0.1 port 41336
Sep 4 04:26:49.419144 sshd-session[1794]: pam_unix(sshd:session): session closed for user core
Sep 4 04:26:49.428169 systemd[1]: sshd@8-10.0.0.124:22-10.0.0.1:41336.service: Deactivated successfully.
Sep 4 04:26:49.433518 systemd[1]: session-9.scope: Deactivated successfully.
Sep 4 04:26:49.436049 systemd[1]: session-9.scope: Consumed 6.569s CPU time, 263.7M memory peak.
Sep 4 04:26:49.439271 systemd-logind[1541]: Session 9 logged out. Waiting for processes to exit.
Sep 4 04:26:49.441655 systemd-logind[1541]: Removed session 9.
Sep 4 04:26:49.774271 kubelet[2781]: E0904 04:26:49.774209 2781 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 4 04:26:50.948465 kubelet[2781]: I0904 04:26:50.948388 2781 kuberuntime_manager.go:1746] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24"
Sep 4 04:26:50.949614 containerd[1569]: time="2025-09-04T04:26:50.949556207Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
Sep 4 04:26:50.950066 kubelet[2781]: I0904 04:26:50.949900 2781 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24"
Sep 4 04:26:51.876886 systemd[1]: Created slice kubepods-besteffort-podb68cc79f_f200_4adf_8ac0_e1072d1cb8d9.slice - libcontainer container kubepods-besteffort-podb68cc79f_f200_4adf_8ac0_e1072d1cb8d9.slice.
Sep 4 04:26:51.890008 systemd[1]: Created slice kubepods-burstable-pod272d1165_2638_4a13_9d03_5f65b1025287.slice - libcontainer container kubepods-burstable-pod272d1165_2638_4a13_9d03_5f65b1025287.slice.
Sep 4 04:26:51.965986 kubelet[2781]: I0904 04:26:51.965939 2781 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/b68cc79f-f200-4adf-8ac0-e1072d1cb8d9-xtables-lock\") pod \"kube-proxy-dvm45\" (UID: \"b68cc79f-f200-4adf-8ac0-e1072d1cb8d9\") " pod="kube-system/kube-proxy-dvm45"
Sep 4 04:26:51.965986 kubelet[2781]: I0904 04:26:51.965977 2781 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/272d1165-2638-4a13-9d03-5f65b1025287-hostproc\") pod \"cilium-s2n8f\" (UID: \"272d1165-2638-4a13-9d03-5f65b1025287\") " pod="kube-system/cilium-s2n8f"
Sep 4 04:26:51.965986 kubelet[2781]: I0904 04:26:51.966000 2781 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/272d1165-2638-4a13-9d03-5f65b1025287-cilium-cgroup\") pod \"cilium-s2n8f\" (UID: \"272d1165-2638-4a13-9d03-5f65b1025287\") " pod="kube-system/cilium-s2n8f"
Sep 4 04:26:51.966557 kubelet[2781]: I0904 04:26:51.966015 2781 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/272d1165-2638-4a13-9d03-5f65b1025287-clustermesh-secrets\") pod \"cilium-s2n8f\" (UID: \"272d1165-2638-4a13-9d03-5f65b1025287\") " pod="kube-system/cilium-s2n8f"
Sep 4 04:26:51.966557 kubelet[2781]: I0904 04:26:51.966050 2781 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/272d1165-2638-4a13-9d03-5f65b1025287-cilium-config-path\") pod \"cilium-s2n8f\" (UID: \"272d1165-2638-4a13-9d03-5f65b1025287\") " pod="kube-system/cilium-s2n8f"
Sep 4 04:26:51.966557 kubelet[2781]: I0904 04:26:51.966068 2781 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/272d1165-2638-4a13-9d03-5f65b1025287-host-proc-sys-kernel\") pod \"cilium-s2n8f\" (UID: \"272d1165-2638-4a13-9d03-5f65b1025287\") " pod="kube-system/cilium-s2n8f"
Sep 4 04:26:51.966557 kubelet[2781]: I0904 04:26:51.966122 2781 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/272d1165-2638-4a13-9d03-5f65b1025287-hubble-tls\") pod \"cilium-s2n8f\" (UID: \"272d1165-2638-4a13-9d03-5f65b1025287\") " pod="kube-system/cilium-s2n8f"
Sep 4 04:26:51.966557 kubelet[2781]: I0904 04:26:51.966139 2781 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b68cc79f-f200-4adf-8ac0-e1072d1cb8d9-lib-modules\") pod \"kube-proxy-dvm45\" (UID: \"b68cc79f-f200-4adf-8ac0-e1072d1cb8d9\") " pod="kube-system/kube-proxy-dvm45"
Sep 4 04:26:51.966683 kubelet[2781]: I0904 04:26:51.966152 2781 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/272d1165-2638-4a13-9d03-5f65b1025287-xtables-lock\") pod \"cilium-s2n8f\" (UID: \"272d1165-2638-4a13-9d03-5f65b1025287\") " pod="kube-system/cilium-s2n8f"
Sep 4 04:26:51.966683 kubelet[2781]: I0904 04:26:51.966168 2781 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/272d1165-2638-4a13-9d03-5f65b1025287-cilium-run\") pod \"cilium-s2n8f\" (UID: \"272d1165-2638-4a13-9d03-5f65b1025287\") " pod="kube-system/cilium-s2n8f"
Sep 4 04:26:51.966683 kubelet[2781]: I0904 04:26:51.966186 2781 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/272d1165-2638-4a13-9d03-5f65b1025287-bpf-maps\") pod \"cilium-s2n8f\" (UID: \"272d1165-2638-4a13-9d03-5f65b1025287\") " pod="kube-system/cilium-s2n8f"
Sep 4 04:26:51.966683 kubelet[2781]: I0904 04:26:51.966198 2781 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/272d1165-2638-4a13-9d03-5f65b1025287-cni-path\") pod \"cilium-s2n8f\" (UID: \"272d1165-2638-4a13-9d03-5f65b1025287\") " pod="kube-system/cilium-s2n8f"
Sep 4 04:26:51.966683 kubelet[2781]: I0904 04:26:51.966210 2781 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/272d1165-2638-4a13-9d03-5f65b1025287-etc-cni-netd\") pod \"cilium-s2n8f\" (UID: \"272d1165-2638-4a13-9d03-5f65b1025287\") " pod="kube-system/cilium-s2n8f"
Sep 4 04:26:51.966683 kubelet[2781]: I0904 04:26:51.966224 2781 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/272d1165-2638-4a13-9d03-5f65b1025287-lib-modules\") pod \"cilium-s2n8f\" (UID: \"272d1165-2638-4a13-9d03-5f65b1025287\") " pod="kube-system/cilium-s2n8f"
Sep 4 04:26:51.966832 kubelet[2781]: I0904 04:26:51.966253 2781 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/b68cc79f-f200-4adf-8ac0-e1072d1cb8d9-kube-proxy\") pod \"kube-proxy-dvm45\" (UID: \"b68cc79f-f200-4adf-8ac0-e1072d1cb8d9\") " pod="kube-system/kube-proxy-dvm45"
Sep 4 04:26:51.966832 kubelet[2781]: I0904 04:26:51.966268 2781 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tkfwg\" (UniqueName: \"kubernetes.io/projected/b68cc79f-f200-4adf-8ac0-e1072d1cb8d9-kube-api-access-tkfwg\") pod \"kube-proxy-dvm45\" (UID: \"b68cc79f-f200-4adf-8ac0-e1072d1cb8d9\") " pod="kube-system/kube-proxy-dvm45"
Sep 4 04:26:51.966832 kubelet[2781]: I0904 04:26:51.966284 2781 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/272d1165-2638-4a13-9d03-5f65b1025287-host-proc-sys-net\") pod \"cilium-s2n8f\" (UID: \"272d1165-2638-4a13-9d03-5f65b1025287\") " pod="kube-system/cilium-s2n8f"
Sep 4 04:26:51.966832 kubelet[2781]: I0904 04:26:51.966299 2781 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zmt6l\" (UniqueName: \"kubernetes.io/projected/272d1165-2638-4a13-9d03-5f65b1025287-kube-api-access-zmt6l\") pod \"cilium-s2n8f\" (UID: \"272d1165-2638-4a13-9d03-5f65b1025287\") " pod="kube-system/cilium-s2n8f"
Sep 4 04:26:52.067446 systemd[1]: Created slice kubepods-besteffort-pod332e4948_c0e8_4698_8cb1_4a68650a04f3.slice - libcontainer container kubepods-besteffort-pod332e4948_c0e8_4698_8cb1_4a68650a04f3.slice.
Sep 4 04:26:52.069892 kubelet[2781]: I0904 04:26:52.069561 2781 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/332e4948-c0e8-4698-8cb1-4a68650a04f3-cilium-config-path\") pod \"cilium-operator-6c4d7847fc-p587m\" (UID: \"332e4948-c0e8-4698-8cb1-4a68650a04f3\") " pod="kube-system/cilium-operator-6c4d7847fc-p587m"
Sep 4 04:26:52.069892 kubelet[2781]: I0904 04:26:52.069612 2781 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fxmqm\" (UniqueName: \"kubernetes.io/projected/332e4948-c0e8-4698-8cb1-4a68650a04f3-kube-api-access-fxmqm\") pod \"cilium-operator-6c4d7847fc-p587m\" (UID: \"332e4948-c0e8-4698-8cb1-4a68650a04f3\") " pod="kube-system/cilium-operator-6c4d7847fc-p587m"
Sep 4 04:26:52.187402 kubelet[2781]: E0904 04:26:52.187261 2781 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 4 04:26:52.188592 containerd[1569]: time="2025-09-04T04:26:52.188523749Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-dvm45,Uid:b68cc79f-f200-4adf-8ac0-e1072d1cb8d9,Namespace:kube-system,Attempt:0,}"
Sep 4 04:26:52.195481 kubelet[2781]: E0904 04:26:52.195451 2781 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 4 04:26:52.196097 containerd[1569]: time="2025-09-04T04:26:52.196051316Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-s2n8f,Uid:272d1165-2638-4a13-9d03-5f65b1025287,Namespace:kube-system,Attempt:0,}"
Sep 4 04:26:52.265630 containerd[1569]: time="2025-09-04T04:26:52.264613976Z" level=info msg="connecting to shim becdd870c9ff3cc48ce4a895e7ab6514e35e31d5f9e5cde978ba2693ea089627" address="unix:///run/containerd/s/48ddd60436deea008776aa700477207c33325f77e2cbd69d7f59c5794e7e114d" namespace=k8s.io protocol=ttrpc version=3
Sep 4 04:26:52.267604 containerd[1569]: time="2025-09-04T04:26:52.267193908Z" level=info msg="connecting to shim b9072e739f09e1bf8d3a193477f35b477598fdfdfd11e784d0e55c55b2de4dca" address="unix:///run/containerd/s/de1af1b6d2f64a8681afeade7e50d4fc4a0afb0b3d534720512185a63993806b" namespace=k8s.io protocol=ttrpc version=3
Sep 4 04:26:52.336129 systemd[1]: Started cri-containerd-b9072e739f09e1bf8d3a193477f35b477598fdfdfd11e784d0e55c55b2de4dca.scope - libcontainer container b9072e739f09e1bf8d3a193477f35b477598fdfdfd11e784d0e55c55b2de4dca.
Sep 4 04:26:52.338400 systemd[1]: Started cri-containerd-becdd870c9ff3cc48ce4a895e7ab6514e35e31d5f9e5cde978ba2693ea089627.scope - libcontainer container becdd870c9ff3cc48ce4a895e7ab6514e35e31d5f9e5cde978ba2693ea089627.
Sep 4 04:26:52.372136 kubelet[2781]: E0904 04:26:52.372059 2781 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 4 04:26:52.373193 containerd[1569]: time="2025-09-04T04:26:52.372819245Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-p587m,Uid:332e4948-c0e8-4698-8cb1-4a68650a04f3,Namespace:kube-system,Attempt:0,}"
Sep 4 04:26:52.374522 containerd[1569]: time="2025-09-04T04:26:52.374471155Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-s2n8f,Uid:272d1165-2638-4a13-9d03-5f65b1025287,Namespace:kube-system,Attempt:0,} returns sandbox id \"b9072e739f09e1bf8d3a193477f35b477598fdfdfd11e784d0e55c55b2de4dca\""
Sep 4 04:26:52.375459 kubelet[2781]: E0904 04:26:52.375416 2781 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 4 04:26:52.377464 containerd[1569]: time="2025-09-04T04:26:52.377428370Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\""
Sep 4 04:26:52.377995 containerd[1569]: time="2025-09-04T04:26:52.377963500Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-dvm45,Uid:b68cc79f-f200-4adf-8ac0-e1072d1cb8d9,Namespace:kube-system,Attempt:0,} returns sandbox id \"becdd870c9ff3cc48ce4a895e7ab6514e35e31d5f9e5cde978ba2693ea089627\""
Sep 4 04:26:52.379015 kubelet[2781]: E0904 04:26:52.378980 2781 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 4 04:26:52.387288 containerd[1569]: time="2025-09-04T04:26:52.387212979Z" level=info msg="CreateContainer within sandbox \"becdd870c9ff3cc48ce4a895e7ab6514e35e31d5f9e5cde978ba2693ea089627\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
Sep 4 04:26:52.405142 containerd[1569]: time="2025-09-04T04:26:52.404027906Z" level=info msg="Container 8eda8dc2269366becb8caf0ff08951c0dde255b84e1e7cb31b6dae7bbf5f5611: CDI devices from CRI Config.CDIDevices: []"
Sep 4 04:26:52.408144 containerd[1569]: time="2025-09-04T04:26:52.408085550Z" level=info msg="connecting to shim 4b1f89de00f5350b7d3c46a0e6970de75f79a3f318da3902040cbe181292cd79" address="unix:///run/containerd/s/a3db845faa5a4c318abb40dec3c84f0a03e506e98770751e48510f258b431061" namespace=k8s.io protocol=ttrpc version=3
Sep 4 04:26:52.413542 containerd[1569]: time="2025-09-04T04:26:52.413507039Z" level=info msg="CreateContainer within sandbox \"becdd870c9ff3cc48ce4a895e7ab6514e35e31d5f9e5cde978ba2693ea089627\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"8eda8dc2269366becb8caf0ff08951c0dde255b84e1e7cb31b6dae7bbf5f5611\""
Sep 4 04:26:52.414836 containerd[1569]: time="2025-09-04T04:26:52.414763312Z" level=info msg="StartContainer for \"8eda8dc2269366becb8caf0ff08951c0dde255b84e1e7cb31b6dae7bbf5f5611\""
Sep 4 04:26:52.418891 containerd[1569]: time="2025-09-04T04:26:52.416272261Z" level=info msg="connecting to shim 8eda8dc2269366becb8caf0ff08951c0dde255b84e1e7cb31b6dae7bbf5f5611" address="unix:///run/containerd/s/48ddd60436deea008776aa700477207c33325f77e2cbd69d7f59c5794e7e114d" protocol=ttrpc version=3
Sep 4 04:26:52.441086 systemd[1]: Started cri-containerd-4b1f89de00f5350b7d3c46a0e6970de75f79a3f318da3902040cbe181292cd79.scope - libcontainer container 4b1f89de00f5350b7d3c46a0e6970de75f79a3f318da3902040cbe181292cd79.
Sep 4 04:26:52.446227 systemd[1]: Started cri-containerd-8eda8dc2269366becb8caf0ff08951c0dde255b84e1e7cb31b6dae7bbf5f5611.scope - libcontainer container 8eda8dc2269366becb8caf0ff08951c0dde255b84e1e7cb31b6dae7bbf5f5611.
Sep 4 04:26:52.495650 containerd[1569]: time="2025-09-04T04:26:52.495572460Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-p587m,Uid:332e4948-c0e8-4698-8cb1-4a68650a04f3,Namespace:kube-system,Attempt:0,} returns sandbox id \"4b1f89de00f5350b7d3c46a0e6970de75f79a3f318da3902040cbe181292cd79\""
Sep 4 04:26:52.496896 kubelet[2781]: E0904 04:26:52.496627 2781 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 4 04:26:52.512984 containerd[1569]: time="2025-09-04T04:26:52.512921526Z" level=info msg="StartContainer for \"8eda8dc2269366becb8caf0ff08951c0dde255b84e1e7cb31b6dae7bbf5f5611\" returns successfully"
Sep 4 04:26:52.786765 kubelet[2781]: E0904 04:26:52.786316 2781 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 4 04:26:52.797395 kubelet[2781]: I0904 04:26:52.797282 2781 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-dvm45" podStartSLOduration=1.797255957 podStartE2EDuration="1.797255957s" podCreationTimestamp="2025-09-04 04:26:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-04 04:26:52.796744872 +0000 UTC m=+6.146065162" watchObservedRunningTime="2025-09-04 04:26:52.797255957 +0000 UTC m=+6.146576248"
Sep 4 04:26:53.281252 kubelet[2781]: E0904 04:26:53.281190 2781 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 4 04:26:53.787554 kubelet[2781]: E0904 04:26:53.787512 2781 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 4 04:26:57.712445 kubelet[2781]: E0904 04:26:57.712382 2781 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 4 04:26:58.141399 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4263191832.mount: Deactivated successfully.
Sep 4 04:26:58.184356 kubelet[2781]: E0904 04:26:58.184317 2781 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 4 04:26:58.796444 kubelet[2781]: E0904 04:26:58.796401 2781 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 4 04:27:05.377982 containerd[1569]: time="2025-09-04T04:27:05.377904686Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 04:27:05.406413 containerd[1569]: time="2025-09-04T04:27:05.406275078Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=166730503"
Sep 4 04:27:05.414940 containerd[1569]: time="2025-09-04T04:27:05.414824037Z" level=info msg="ImageCreate event name:\"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 04:27:05.416347 containerd[1569]: time="2025-09-04T04:27:05.416287801Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"166719855\" in 13.038819607s"
Sep 4 04:27:05.416347 containerd[1569]: time="2025-09-04T04:27:05.416336082Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\""
Sep 4 04:27:05.417528 containerd[1569]: time="2025-09-04T04:27:05.417487918Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\""
Sep 4 04:27:05.461738 containerd[1569]: time="2025-09-04T04:27:05.461652336Z" level=info msg="CreateContainer within sandbox \"b9072e739f09e1bf8d3a193477f35b477598fdfdfd11e784d0e55c55b2de4dca\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Sep 4 04:27:05.479222 containerd[1569]: time="2025-09-04T04:27:05.479146091Z" level=info msg="Container 559d97720eae9273c0fde1a3d25bd5f59a91b0f41ecac7be54036d77dd5e2daf: CDI devices from CRI Config.CDIDevices: []"
Sep 4 04:27:05.484472 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2014108370.mount: Deactivated successfully.
Sep 4 04:27:05.488736 containerd[1569]: time="2025-09-04T04:27:05.488671266Z" level=info msg="CreateContainer within sandbox \"b9072e739f09e1bf8d3a193477f35b477598fdfdfd11e784d0e55c55b2de4dca\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"559d97720eae9273c0fde1a3d25bd5f59a91b0f41ecac7be54036d77dd5e2daf\""
Sep 4 04:27:05.489228 containerd[1569]: time="2025-09-04T04:27:05.489174673Z" level=info msg="StartContainer for \"559d97720eae9273c0fde1a3d25bd5f59a91b0f41ecac7be54036d77dd5e2daf\""
Sep 4 04:27:05.490469 containerd[1569]: time="2025-09-04T04:27:05.490437578Z" level=info msg="connecting to shim 559d97720eae9273c0fde1a3d25bd5f59a91b0f41ecac7be54036d77dd5e2daf" address="unix:///run/containerd/s/de1af1b6d2f64a8681afeade7e50d4fc4a0afb0b3d534720512185a63993806b" protocol=ttrpc version=3
Sep 4 04:27:05.520044 systemd[1]: Started cri-containerd-559d97720eae9273c0fde1a3d25bd5f59a91b0f41ecac7be54036d77dd5e2daf.scope - libcontainer container 559d97720eae9273c0fde1a3d25bd5f59a91b0f41ecac7be54036d77dd5e2daf.
Sep 4 04:27:05.574501 systemd[1]: cri-containerd-559d97720eae9273c0fde1a3d25bd5f59a91b0f41ecac7be54036d77dd5e2daf.scope: Deactivated successfully.
Sep 4 04:27:05.578573 containerd[1569]: time="2025-09-04T04:27:05.578518639Z" level=info msg="TaskExit event in podsandbox handler container_id:\"559d97720eae9273c0fde1a3d25bd5f59a91b0f41ecac7be54036d77dd5e2daf\" id:\"559d97720eae9273c0fde1a3d25bd5f59a91b0f41ecac7be54036d77dd5e2daf\" pid:3212 exited_at:{seconds:1756960025 nanos:577206160}"
Sep 4 04:27:05.679733 containerd[1569]: time="2025-09-04T04:27:05.679519861Z" level=info msg="received exit event container_id:\"559d97720eae9273c0fde1a3d25bd5f59a91b0f41ecac7be54036d77dd5e2daf\" id:\"559d97720eae9273c0fde1a3d25bd5f59a91b0f41ecac7be54036d77dd5e2daf\" pid:3212 exited_at:{seconds:1756960025 nanos:577206160}"
Sep 4 04:27:05.680978 containerd[1569]: time="2025-09-04T04:27:05.680932688Z" level=info msg="StartContainer for \"559d97720eae9273c0fde1a3d25bd5f59a91b0f41ecac7be54036d77dd5e2daf\" returns successfully"
Sep 4 04:27:05.702576 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-559d97720eae9273c0fde1a3d25bd5f59a91b0f41ecac7be54036d77dd5e2daf-rootfs.mount: Deactivated successfully.
Sep 4 04:27:05.890783 kubelet[2781]: E0904 04:27:05.890728 2781 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 4 04:27:06.894900 kubelet[2781]: E0904 04:27:06.894834 2781 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 4 04:27:06.901033 containerd[1569]: time="2025-09-04T04:27:06.900830037Z" level=info msg="CreateContainer within sandbox \"b9072e739f09e1bf8d3a193477f35b477598fdfdfd11e784d0e55c55b2de4dca\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Sep 4 04:27:06.914815 containerd[1569]: time="2025-09-04T04:27:06.914758300Z" level=info msg="Container b6f4c602397a9e54e95f082bbede782cb9a6c9a34b9f6435e01d207b0c1f713f: CDI devices from CRI Config.CDIDevices: []"
Sep 4 04:27:06.927554 containerd[1569]: time="2025-09-04T04:27:06.927491845Z" level=info msg="CreateContainer within sandbox \"b9072e739f09e1bf8d3a193477f35b477598fdfdfd11e784d0e55c55b2de4dca\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"b6f4c602397a9e54e95f082bbede782cb9a6c9a34b9f6435e01d207b0c1f713f\""
Sep 4 04:27:06.928410 containerd[1569]: time="2025-09-04T04:27:06.928373393Z" level=info msg="StartContainer for \"b6f4c602397a9e54e95f082bbede782cb9a6c9a34b9f6435e01d207b0c1f713f\""
Sep 4 04:27:06.930628 containerd[1569]: time="2025-09-04T04:27:06.930501125Z" level=info msg="connecting to shim b6f4c602397a9e54e95f082bbede782cb9a6c9a34b9f6435e01d207b0c1f713f" address="unix:///run/containerd/s/de1af1b6d2f64a8681afeade7e50d4fc4a0afb0b3d534720512185a63993806b" protocol=ttrpc version=3
Sep 4 04:27:06.961007 systemd[1]: Started cri-containerd-b6f4c602397a9e54e95f082bbede782cb9a6c9a34b9f6435e01d207b0c1f713f.scope - libcontainer container b6f4c602397a9e54e95f082bbede782cb9a6c9a34b9f6435e01d207b0c1f713f.
Sep 4 04:27:07.112731 containerd[1569]: time="2025-09-04T04:27:07.112682732Z" level=info msg="StartContainer for \"b6f4c602397a9e54e95f082bbede782cb9a6c9a34b9f6435e01d207b0c1f713f\" returns successfully"
Sep 4 04:27:07.160991 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Sep 4 04:27:07.161938 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Sep 4 04:27:07.162456 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables...
Sep 4 04:27:07.167040 containerd[1569]: time="2025-09-04T04:27:07.165598142Z" level=info msg="received exit event container_id:\"b6f4c602397a9e54e95f082bbede782cb9a6c9a34b9f6435e01d207b0c1f713f\" id:\"b6f4c602397a9e54e95f082bbede782cb9a6c9a34b9f6435e01d207b0c1f713f\" pid:3258 exited_at:{seconds:1756960027 nanos:165203710}"
Sep 4 04:27:07.167040 containerd[1569]: time="2025-09-04T04:27:07.165754216Z" level=info msg="TaskExit event in podsandbox handler container_id:\"b6f4c602397a9e54e95f082bbede782cb9a6c9a34b9f6435e01d207b0c1f713f\" id:\"b6f4c602397a9e54e95f082bbede782cb9a6c9a34b9f6435e01d207b0c1f713f\" pid:3258 exited_at:{seconds:1756960027 nanos:165203710}"
Sep 4 04:27:07.165927 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Sep 4 04:27:07.169793 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Sep 4 04:27:07.170441 systemd[1]: cri-containerd-b6f4c602397a9e54e95f082bbede782cb9a6c9a34b9f6435e01d207b0c1f713f.scope: Deactivated successfully.
Sep 4 04:27:07.206406 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Sep 4 04:27:07.897961 kubelet[2781]: E0904 04:27:07.897919 2781 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 4 04:27:07.915320 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount200474404.mount: Deactivated successfully.
Sep 4 04:27:07.915484 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b6f4c602397a9e54e95f082bbede782cb9a6c9a34b9f6435e01d207b0c1f713f-rootfs.mount: Deactivated successfully.
Sep 4 04:27:07.949688 containerd[1569]: time="2025-09-04T04:27:07.949624489Z" level=info msg="CreateContainer within sandbox \"b9072e739f09e1bf8d3a193477f35b477598fdfdfd11e784d0e55c55b2de4dca\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Sep 4 04:27:07.976024 containerd[1569]: time="2025-09-04T04:27:07.975944305Z" level=info msg="Container aa1ffffa1687de8310ae6efe40181e903b645ec2fb95bf4311f9825dea8283f7: CDI devices from CRI Config.CDIDevices: []"
Sep 4 04:27:07.998982 containerd[1569]: time="2025-09-04T04:27:07.998845423Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 04:27:07.999888 containerd[1569]: time="2025-09-04T04:27:07.999812051Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=18904197"
Sep 4 04:27:07.999888 containerd[1569]: time="2025-09-04T04:27:07.999817972Z" level=info msg="CreateContainer within sandbox \"b9072e739f09e1bf8d3a193477f35b477598fdfdfd11e784d0e55c55b2de4dca\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"aa1ffffa1687de8310ae6efe40181e903b645ec2fb95bf4311f9825dea8283f7\""
Sep 4 04:27:08.000628 containerd[1569]: time="2025-09-04T04:27:08.000592308Z" level=info msg="StartContainer for \"aa1ffffa1687de8310ae6efe40181e903b645ec2fb95bf4311f9825dea8283f7\""
Sep 4 04:27:08.001713 containerd[1569]: time="2025-09-04T04:27:08.001650377Z" level=info msg="ImageCreate event name:\"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 04:27:08.002392 containerd[1569]: time="2025-09-04T04:27:08.002365971Z" level=info msg="connecting to shim aa1ffffa1687de8310ae6efe40181e903b645ec2fb95bf4311f9825dea8283f7" address="unix:///run/containerd/s/de1af1b6d2f64a8681afeade7e50d4fc4a0afb0b3d534720512185a63993806b" protocol=ttrpc version=3
Sep 4 04:27:08.002869 containerd[1569]: time="2025-09-04T04:27:08.002804425Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"18897442\" in 2.585246295s"
Sep 4 04:27:08.002869 containerd[1569]: time="2025-09-04T04:27:08.002838871Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\""
Sep 4 04:27:08.008878 containerd[1569]: time="2025-09-04T04:27:08.008609900Z" level=info msg="CreateContainer within sandbox \"4b1f89de00f5350b7d3c46a0e6970de75f79a3f318da3902040cbe181292cd79\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}"
Sep 4 04:27:08.024796 containerd[1569]: time="2025-09-04T04:27:08.024737316Z" level=info msg="Container 69720d35dfb99c7f6e6df555c7ea0450b04b3c1e3a9bcfd8dbb4d068b0a9ca65: CDI devices from CRI Config.CDIDevices: []"
Sep 4 04:27:08.032279 containerd[1569]: time="2025-09-04T04:27:08.031848104Z" level=info msg="CreateContainer within sandbox \"4b1f89de00f5350b7d3c46a0e6970de75f79a3f318da3902040cbe181292cd79\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"69720d35dfb99c7f6e6df555c7ea0450b04b3c1e3a9bcfd8dbb4d068b0a9ca65\""
Sep 4 04:27:08.033392 containerd[1569]: time="2025-09-04T04:27:08.033348805Z" level=info msg="StartContainer for \"69720d35dfb99c7f6e6df555c7ea0450b04b3c1e3a9bcfd8dbb4d068b0a9ca65\""
Sep 4 04:27:08.035699 containerd[1569]: time="2025-09-04T04:27:08.035649660Z" level=info msg="connecting to shim 69720d35dfb99c7f6e6df555c7ea0450b04b3c1e3a9bcfd8dbb4d068b0a9ca65" address="unix:///run/containerd/s/a3db845faa5a4c318abb40dec3c84f0a03e506e98770751e48510f258b431061" protocol=ttrpc version=3
Sep 4 04:27:08.040082 systemd[1]: Started cri-containerd-aa1ffffa1687de8310ae6efe40181e903b645ec2fb95bf4311f9825dea8283f7.scope - libcontainer container aa1ffffa1687de8310ae6efe40181e903b645ec2fb95bf4311f9825dea8283f7.
Sep 4 04:27:08.070160 systemd[1]: Started cri-containerd-69720d35dfb99c7f6e6df555c7ea0450b04b3c1e3a9bcfd8dbb4d068b0a9ca65.scope - libcontainer container 69720d35dfb99c7f6e6df555c7ea0450b04b3c1e3a9bcfd8dbb4d068b0a9ca65.
Sep 4 04:27:08.101444 systemd[1]: cri-containerd-aa1ffffa1687de8310ae6efe40181e903b645ec2fb95bf4311f9825dea8283f7.scope: Deactivated successfully.
Sep 4 04:27:08.103136 containerd[1569]: time="2025-09-04T04:27:08.102461927Z" level=info msg="StartContainer for \"aa1ffffa1687de8310ae6efe40181e903b645ec2fb95bf4311f9825dea8283f7\" returns successfully"
Sep 4 04:27:08.105244 containerd[1569]: time="2025-09-04T04:27:08.105133530Z" level=info msg="received exit event container_id:\"aa1ffffa1687de8310ae6efe40181e903b645ec2fb95bf4311f9825dea8283f7\" id:\"aa1ffffa1687de8310ae6efe40181e903b645ec2fb95bf4311f9825dea8283f7\" pid:3325 exited_at:{seconds:1756960028 nanos:104799912}"
Sep 4 04:27:08.105355 containerd[1569]: time="2025-09-04T04:27:08.105319038Z" level=info msg="TaskExit event in podsandbox handler container_id:\"aa1ffffa1687de8310ae6efe40181e903b645ec2fb95bf4311f9825dea8283f7\" id:\"aa1ffffa1687de8310ae6efe40181e903b645ec2fb95bf4311f9825dea8283f7\" pid:3325 exited_at:{seconds:1756960028 nanos:104799912}"
Sep 4 04:27:08.118758 containerd[1569]: time="2025-09-04T04:27:08.118566000Z" level=info msg="StartContainer for \"69720d35dfb99c7f6e6df555c7ea0450b04b3c1e3a9bcfd8dbb4d068b0a9ca65\" returns successfully"
Sep 4 04:27:08.906837 kubelet[2781]: E0904 04:27:08.906774 2781 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 4 04:27:08.916677 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-aa1ffffa1687de8310ae6efe40181e903b645ec2fb95bf4311f9825dea8283f7-rootfs.mount: Deactivated successfully.
Sep 4 04:27:08.918273 kubelet[2781]: E0904 04:27:08.918193 2781 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 4 04:27:08.929595 containerd[1569]: time="2025-09-04T04:27:08.929444672Z" level=info msg="CreateContainer within sandbox \"b9072e739f09e1bf8d3a193477f35b477598fdfdfd11e784d0e55c55b2de4dca\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Sep 4 04:27:08.975486 containerd[1569]: time="2025-09-04T04:27:08.975405402Z" level=info msg="Container b22cf226defd761aca982ec181193f0040ce0d7932c7822cc3c2d181761513eb: CDI devices from CRI Config.CDIDevices: []"
Sep 4 04:27:09.008036 containerd[1569]: time="2025-09-04T04:27:09.007125419Z" level=info msg="CreateContainer within sandbox \"b9072e739f09e1bf8d3a193477f35b477598fdfdfd11e784d0e55c55b2de4dca\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"b22cf226defd761aca982ec181193f0040ce0d7932c7822cc3c2d181761513eb\""
Sep 4 04:27:09.009186 containerd[1569]: time="2025-09-04T04:27:09.009133764Z" level=info msg="StartContainer for \"b22cf226defd761aca982ec181193f0040ce0d7932c7822cc3c2d181761513eb\""
Sep 4 04:27:09.014745 containerd[1569]: time="2025-09-04T04:27:09.014687912Z" level=info msg="connecting to shim b22cf226defd761aca982ec181193f0040ce0d7932c7822cc3c2d181761513eb" address="unix:///run/containerd/s/de1af1b6d2f64a8681afeade7e50d4fc4a0afb0b3d534720512185a63993806b" protocol=ttrpc version=3
Sep 4 04:27:09.054910 kubelet[2781]: I0904 04:27:09.054816 2781 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-6c4d7847fc-p587m" podStartSLOduration=1.548189363 podStartE2EDuration="17.054790067s" podCreationTimestamp="2025-09-04 04:26:52 +0000 UTC" firstStartedPulling="2025-09-04 04:26:52.497336672 +0000 UTC m=+5.846656962" lastFinishedPulling="2025-09-04 04:27:08.003937376 +0000 UTC m=+21.353257666" observedRunningTime="2025-09-04 04:27:08.949681925 +0000 UTC m=+22.299002235" watchObservedRunningTime="2025-09-04 04:27:09.054790067 +0000 UTC m=+22.404110377"
Sep 4 04:27:09.070167 systemd[1]: Started cri-containerd-b22cf226defd761aca982ec181193f0040ce0d7932c7822cc3c2d181761513eb.scope - libcontainer container b22cf226defd761aca982ec181193f0040ce0d7932c7822cc3c2d181761513eb.
Sep 4 04:27:09.147119 systemd[1]: cri-containerd-b22cf226defd761aca982ec181193f0040ce0d7932c7822cc3c2d181761513eb.scope: Deactivated successfully.
Sep 4 04:27:09.149054 containerd[1569]: time="2025-09-04T04:27:09.149005829Z" level=info msg="StartContainer for \"b22cf226defd761aca982ec181193f0040ce0d7932c7822cc3c2d181761513eb\" returns successfully"
Sep 4 04:27:09.151509 containerd[1569]: time="2025-09-04T04:27:09.151471865Z" level=info msg="received exit event container_id:\"b22cf226defd761aca982ec181193f0040ce0d7932c7822cc3c2d181761513eb\" id:\"b22cf226defd761aca982ec181193f0040ce0d7932c7822cc3c2d181761513eb\" pid:3394 exited_at:{seconds:1756960029 nanos:151106668}"
Sep 4 04:27:09.152595 containerd[1569]: time="2025-09-04T04:27:09.152513352Z" level=info msg="TaskExit event in podsandbox handler container_id:\"b22cf226defd761aca982ec181193f0040ce0d7932c7822cc3c2d181761513eb\" id:\"b22cf226defd761aca982ec181193f0040ce0d7932c7822cc3c2d181761513eb\" pid:3394 exited_at:{seconds:1756960029 nanos:151106668}"
Sep 4 04:27:09.213231 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b22cf226defd761aca982ec181193f0040ce0d7932c7822cc3c2d181761513eb-rootfs.mount: Deactivated successfully.
Sep 4 04:27:09.930912 kubelet[2781]: E0904 04:27:09.929373 2781 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 4 04:27:09.930912 kubelet[2781]: E0904 04:27:09.930018 2781 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 4 04:27:09.969436 containerd[1569]: time="2025-09-04T04:27:09.969174538Z" level=info msg="CreateContainer within sandbox \"b9072e739f09e1bf8d3a193477f35b477598fdfdfd11e784d0e55c55b2de4dca\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Sep 4 04:27:10.047980 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1931939677.mount: Deactivated successfully.
Sep 4 04:27:10.066962 containerd[1569]: time="2025-09-04T04:27:10.066784481Z" level=info msg="Container 603a78728fb87d0558a428ba2614f05af242742a967eb831be09568412de5c23: CDI devices from CRI Config.CDIDevices: []"
Sep 4 04:27:10.074002 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3040509492.mount: Deactivated successfully.
Sep 4 04:27:10.145813 containerd[1569]: time="2025-09-04T04:27:10.145603432Z" level=info msg="CreateContainer within sandbox \"b9072e739f09e1bf8d3a193477f35b477598fdfdfd11e784d0e55c55b2de4dca\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"603a78728fb87d0558a428ba2614f05af242742a967eb831be09568412de5c23\""
Sep 4 04:27:10.149412 containerd[1569]: time="2025-09-04T04:27:10.147759495Z" level=info msg="StartContainer for \"603a78728fb87d0558a428ba2614f05af242742a967eb831be09568412de5c23\""
Sep 4 04:27:10.152940 containerd[1569]: time="2025-09-04T04:27:10.152786511Z" level=info msg="connecting to shim 603a78728fb87d0558a428ba2614f05af242742a967eb831be09568412de5c23" address="unix:///run/containerd/s/de1af1b6d2f64a8681afeade7e50d4fc4a0afb0b3d534720512185a63993806b" protocol=ttrpc version=3
Sep 4 04:27:10.227206 systemd[1]: Started cri-containerd-603a78728fb87d0558a428ba2614f05af242742a967eb831be09568412de5c23.scope - libcontainer container 603a78728fb87d0558a428ba2614f05af242742a967eb831be09568412de5c23.
Sep 4 04:27:10.420509 containerd[1569]: time="2025-09-04T04:27:10.420445753Z" level=info msg="StartContainer for \"603a78728fb87d0558a428ba2614f05af242742a967eb831be09568412de5c23\" returns successfully"
Sep 4 04:27:10.671423 containerd[1569]: time="2025-09-04T04:27:10.668778867Z" level=info msg="TaskExit event in podsandbox handler container_id:\"603a78728fb87d0558a428ba2614f05af242742a967eb831be09568412de5c23\" id:\"3fad6665b86eb3f87e1295d339b26d42e0e9ed660f5c4d637da2fa260c4e281b\" pid:3459 exited_at:{seconds:1756960030 nanos:668096414}"
Sep 4 04:27:10.840893 kubelet[2781]: I0904 04:27:10.840476 2781 kubelet_node_status.go:501] "Fast updating node status as it just became ready"
Sep 4 04:27:10.982907 kubelet[2781]: E0904 04:27:10.981439 2781 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 4 04:27:10.998007 systemd[1]: Created slice kubepods-burstable-podb7066e89_fa02_46bc_99da_ef6c5c3ee646.slice - libcontainer container kubepods-burstable-podb7066e89_fa02_46bc_99da_ef6c5c3ee646.slice.
Sep 4 04:27:11.027659 kubelet[2781]: I0904 04:27:11.027579 2781 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b7066e89-fa02-46bc-99da-ef6c5c3ee646-config-volume\") pod \"coredns-674b8bbfcf-d9z2g\" (UID: \"b7066e89-fa02-46bc-99da-ef6c5c3ee646\") " pod="kube-system/coredns-674b8bbfcf-d9z2g"
Sep 4 04:27:11.027906 kubelet[2781]: I0904 04:27:11.027696 2781 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g6xng\" (UniqueName: \"kubernetes.io/projected/f00e8921-dca3-43cf-99e4-22ad80525331-kube-api-access-g6xng\") pod \"coredns-674b8bbfcf-qqztq\" (UID: \"f00e8921-dca3-43cf-99e4-22ad80525331\") " pod="kube-system/coredns-674b8bbfcf-qqztq"
Sep 4 04:27:11.027906 kubelet[2781]: I0904 04:27:11.027728 2781 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f00e8921-dca3-43cf-99e4-22ad80525331-config-volume\") pod \"coredns-674b8bbfcf-qqztq\" (UID: \"f00e8921-dca3-43cf-99e4-22ad80525331\") " pod="kube-system/coredns-674b8bbfcf-qqztq"
Sep 4 04:27:11.027906 kubelet[2781]: I0904 04:27:11.027769 2781 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xs9g8\" (UniqueName: \"kubernetes.io/projected/b7066e89-fa02-46bc-99da-ef6c5c3ee646-kube-api-access-xs9g8\") pod \"coredns-674b8bbfcf-d9z2g\" (UID: \"b7066e89-fa02-46bc-99da-ef6c5c3ee646\") " pod="kube-system/coredns-674b8bbfcf-d9z2g"
Sep 4 04:27:11.037673 systemd[1]: Created slice kubepods-burstable-podf00e8921_dca3_43cf_99e4_22ad80525331.slice - libcontainer container kubepods-burstable-podf00e8921_dca3_43cf_99e4_22ad80525331.slice.
Sep 4 04:27:11.050287 kubelet[2781]: I0904 04:27:11.049903 2781 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-s2n8f" podStartSLOduration=7.00957947 podStartE2EDuration="20.049874103s" podCreationTimestamp="2025-09-04 04:26:51 +0000 UTC" firstStartedPulling="2025-09-04 04:26:52.377061096 +0000 UTC m=+5.726381386" lastFinishedPulling="2025-09-04 04:27:05.417355719 +0000 UTC m=+18.766676019" observedRunningTime="2025-09-04 04:27:11.049794123 +0000 UTC m=+24.399114433" watchObservedRunningTime="2025-09-04 04:27:11.049874103 +0000 UTC m=+24.399194403"
Sep 4 04:27:11.312760 kubelet[2781]: E0904 04:27:11.311808 2781 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 4 04:27:11.314289 containerd[1569]: time="2025-09-04T04:27:11.314212535Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-d9z2g,Uid:b7066e89-fa02-46bc-99da-ef6c5c3ee646,Namespace:kube-system,Attempt:0,}"
Sep 4 04:27:11.343574 kubelet[2781]: E0904 04:27:11.342286 2781 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 4 04:27:11.352721 containerd[1569]: time="2025-09-04T04:27:11.352622577Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-qqztq,Uid:f00e8921-dca3-43cf-99e4-22ad80525331,Namespace:kube-system,Attempt:0,}"
Sep 4 04:27:11.947171 kubelet[2781]: E0904 04:27:11.947107 2781 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 4 04:27:12.950889 kubelet[2781]: E0904 04:27:12.950069 2781 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 4 04:27:13.683225 systemd-networkd[1493]: cilium_host: Link UP
Sep 4 04:27:13.683745 systemd-networkd[1493]: cilium_net: Link UP
Sep 4 04:27:13.684416 systemd-networkd[1493]: cilium_net: Gained carrier
Sep 4 04:27:13.684652 systemd-networkd[1493]: cilium_host: Gained carrier
Sep 4 04:27:13.804044 systemd-networkd[1493]: cilium_vxlan: Link UP
Sep 4 04:27:13.804225 systemd-networkd[1493]: cilium_vxlan: Gained carrier
Sep 4 04:27:13.849111 systemd-networkd[1493]: cilium_host: Gained IPv6LL
Sep 4 04:27:13.921109 systemd-networkd[1493]: cilium_net: Gained IPv6LL
Sep 4 04:27:14.030900 kernel: NET: Registered PF_ALG protocol family
Sep 4 04:27:14.796936 systemd-networkd[1493]: lxc_health: Link UP
Sep 4 04:27:14.804973 systemd-networkd[1493]: lxc_health: Gained carrier
Sep 4 04:27:14.955993 systemd-networkd[1493]: lxc12de746d128c: Link UP
Sep 4 04:27:14.968892 kernel: eth0: renamed from tmp34333
Sep 4 04:27:14.972605 systemd-networkd[1493]: lxc12de746d128c: Gained carrier
Sep 4 04:27:14.978644 systemd-networkd[1493]: lxca632c4c313d9: Link UP
Sep 4 04:27:14.983339 kernel: eth0: renamed from tmpab01c
Sep 4 04:27:14.984297 systemd-networkd[1493]: lxca632c4c313d9: Gained carrier
Sep 4 04:27:14.994082 systemd-networkd[1493]: cilium_vxlan: Gained IPv6LL
Sep 4 04:27:16.017188 systemd-networkd[1493]: lxc12de746d128c: Gained IPv6LL
Sep 4 04:27:16.197193 kubelet[2781]: E0904 04:27:16.197152 2781 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 4 04:27:16.529094 systemd-networkd[1493]: lxc_health: Gained IPv6LL
Sep 4 04:27:16.722078 systemd-networkd[1493]: lxca632c4c313d9: Gained IPv6LL
Sep 4 04:27:16.959004 kubelet[2781]: E0904 04:27:16.958827 2781 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 4 04:27:17.961160 kubelet[2781]: E0904 04:27:17.961116 2781 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 4 04:27:18.771209 containerd[1569]: time="2025-09-04T04:27:18.771139120Z" level=info msg="connecting to shim ab01c86fd94acf09cccb47e8c1329af89495a816d2873cbe8d6bc47174cebfbf" address="unix:///run/containerd/s/6042cdfc5ff1a057988843e3885c70d1f7108a114b0eeb37f519cadf2fda2095" namespace=k8s.io protocol=ttrpc version=3
Sep 4 04:27:18.773494 containerd[1569]: time="2025-09-04T04:27:18.773370028Z" level=info msg="connecting to shim 3433310cc1f02237b6bc2b40dcfea0304121f42148e40aacbd9c2679772e5bc4" address="unix:///run/containerd/s/777c1872f0eae3eade79af3aedd3c8d33f8aec3676a88c6f21e99ae0124c22f6" namespace=k8s.io protocol=ttrpc version=3
Sep 4 04:27:18.815606 systemd[1]: Started cri-containerd-ab01c86fd94acf09cccb47e8c1329af89495a816d2873cbe8d6bc47174cebfbf.scope - libcontainer container ab01c86fd94acf09cccb47e8c1329af89495a816d2873cbe8d6bc47174cebfbf.
Sep 4 04:27:18.821807 systemd[1]: Started cri-containerd-3433310cc1f02237b6bc2b40dcfea0304121f42148e40aacbd9c2679772e5bc4.scope - libcontainer container 3433310cc1f02237b6bc2b40dcfea0304121f42148e40aacbd9c2679772e5bc4.
Sep 4 04:27:18.836093 systemd-resolved[1409]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
Sep 4 04:27:18.840718 systemd-resolved[1409]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
Sep 4 04:27:18.877361 containerd[1569]: time="2025-09-04T04:27:18.877274544Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-qqztq,Uid:f00e8921-dca3-43cf-99e4-22ad80525331,Namespace:kube-system,Attempt:0,} returns sandbox id \"ab01c86fd94acf09cccb47e8c1329af89495a816d2873cbe8d6bc47174cebfbf\""
Sep 4 04:27:18.883975 kubelet[2781]: E0904 04:27:18.883157 2781 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 4 04:27:18.891965 containerd[1569]: time="2025-09-04T04:27:18.891894063Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-d9z2g,Uid:b7066e89-fa02-46bc-99da-ef6c5c3ee646,Namespace:kube-system,Attempt:0,} returns sandbox id \"3433310cc1f02237b6bc2b40dcfea0304121f42148e40aacbd9c2679772e5bc4\""
Sep 4 04:27:18.892960 kubelet[2781]: E0904 04:27:18.892915 2781 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 4 04:27:18.918503 containerd[1569]: time="2025-09-04T04:27:18.917288371Z" level=info msg="CreateContainer within sandbox \"ab01c86fd94acf09cccb47e8c1329af89495a816d2873cbe8d6bc47174cebfbf\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Sep 4 04:27:18.928034 containerd[1569]: time="2025-09-04T04:27:18.927969867Z" level=info msg="CreateContainer within sandbox \"3433310cc1f02237b6bc2b40dcfea0304121f42148e40aacbd9c2679772e5bc4\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Sep 4 04:27:18.981274 containerd[1569]: time="2025-09-04T04:27:18.981218864Z" level=info msg="Container 4a0c5aebff374994b958ee65fc3b0ba167a3d31c8174204f60c7c22da3c99d40: CDI devices from CRI Config.CDIDevices: []"
Sep 4 04:27:18.982192 containerd[1569]: time="2025-09-04T04:27:18.982164840Z" level=info msg="Container f3bd8446159bc24c9a6ec15ed9a635d533caf4d0dd963f34e6dda8fa8bfb2764: CDI devices from CRI Config.CDIDevices: []"
Sep 4 04:27:19.001269 containerd[1569]: time="2025-09-04T04:27:19.001216550Z" level=info msg="CreateContainer within sandbox \"ab01c86fd94acf09cccb47e8c1329af89495a816d2873cbe8d6bc47174cebfbf\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"4a0c5aebff374994b958ee65fc3b0ba167a3d31c8174204f60c7c22da3c99d40\""
Sep 4 04:27:19.002191 containerd[1569]: time="2025-09-04T04:27:19.002150414Z" level=info msg="CreateContainer within sandbox \"3433310cc1f02237b6bc2b40dcfea0304121f42148e40aacbd9c2679772e5bc4\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"f3bd8446159bc24c9a6ec15ed9a635d533caf4d0dd963f34e6dda8fa8bfb2764\""
Sep 4 04:27:19.002337 containerd[1569]: time="2025-09-04T04:27:19.002288293Z" level=info msg="StartContainer for \"4a0c5aebff374994b958ee65fc3b0ba167a3d31c8174204f60c7c22da3c99d40\""
Sep 4 04:27:19.003475 containerd[1569]: time="2025-09-04T04:27:19.003441257Z" level=info msg="StartContainer for \"f3bd8446159bc24c9a6ec15ed9a635d533caf4d0dd963f34e6dda8fa8bfb2764\""
Sep 4 04:27:19.003585 containerd[1569]: time="2025-09-04T04:27:19.003447599Z" level=info msg="connecting to shim 4a0c5aebff374994b958ee65fc3b0ba167a3d31c8174204f60c7c22da3c99d40" address="unix:///run/containerd/s/6042cdfc5ff1a057988843e3885c70d1f7108a114b0eeb37f519cadf2fda2095" protocol=ttrpc version=3
Sep 4 04:27:19.004435 containerd[1569]: time="2025-09-04T04:27:19.004399787Z" level=info msg="connecting to shim f3bd8446159bc24c9a6ec15ed9a635d533caf4d0dd963f34e6dda8fa8bfb2764" address="unix:///run/containerd/s/777c1872f0eae3eade79af3aedd3c8d33f8aec3676a88c6f21e99ae0124c22f6" protocol=ttrpc version=3
Sep 4 04:27:19.012221 systemd[1]: Started sshd@9-10.0.0.124:22-10.0.0.1:39656.service - OpenSSH per-connection server daemon (10.0.0.1:39656).
Sep 4 04:27:19.025467 systemd[1]: Started cri-containerd-f3bd8446159bc24c9a6ec15ed9a635d533caf4d0dd963f34e6dda8fa8bfb2764.scope - libcontainer container f3bd8446159bc24c9a6ec15ed9a635d533caf4d0dd963f34e6dda8fa8bfb2764.
Sep 4 04:27:19.030086 systemd[1]: Started cri-containerd-4a0c5aebff374994b958ee65fc3b0ba167a3d31c8174204f60c7c22da3c99d40.scope - libcontainer container 4a0c5aebff374994b958ee65fc3b0ba167a3d31c8174204f60c7c22da3c99d40.
Sep 4 04:27:19.073529 containerd[1569]: time="2025-09-04T04:27:19.073490847Z" level=info msg="StartContainer for \"f3bd8446159bc24c9a6ec15ed9a635d533caf4d0dd963f34e6dda8fa8bfb2764\" returns successfully"
Sep 4 04:27:19.073752 containerd[1569]: time="2025-09-04T04:27:19.073595654Z" level=info msg="StartContainer for \"4a0c5aebff374994b958ee65fc3b0ba167a3d31c8174204f60c7c22da3c99d40\" returns successfully"
Sep 4 04:27:19.073821 sshd[4042]: Accepted publickey for core from 10.0.0.1 port 39656 ssh2: RSA SHA256:9+vpZc6EfwWxHenC1ZKsuuGVz7bQEj3BE+z2aG6aI0U
Sep 4 04:27:19.076382 sshd-session[4042]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 4 04:27:19.084159 systemd-logind[1541]: New session 10 of user core.
Sep 4 04:27:19.091049 systemd[1]: Started session-10.scope - Session 10 of User core.
Sep 4 04:27:19.253182 sshd[4091]: Connection closed by 10.0.0.1 port 39656
Sep 4 04:27:19.253538 sshd-session[4042]: pam_unix(sshd:session): session closed for user core
Sep 4 04:27:19.259406 systemd[1]: sshd@9-10.0.0.124:22-10.0.0.1:39656.service: Deactivated successfully.
Sep 4 04:27:19.261961 systemd[1]: session-10.scope: Deactivated successfully.
Sep 4 04:27:19.262917 systemd-logind[1541]: Session 10 logged out. Waiting for processes to exit.
Sep 4 04:27:19.264505 systemd-logind[1541]: Removed session 10.
Sep 4 04:27:19.759325 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1214120832.mount: Deactivated successfully.
Sep 4 04:27:19.976619 kubelet[2781]: E0904 04:27:19.976412 2781 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 4 04:27:19.980598 kubelet[2781]: E0904 04:27:19.980444 2781 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 4 04:27:20.002887 kubelet[2781]: I0904 04:27:20.002239 2781 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-d9z2g" podStartSLOduration=28.002212947 podStartE2EDuration="28.002212947s" podCreationTimestamp="2025-09-04 04:26:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-04 04:27:19.988921564 +0000 UTC m=+33.338241884" watchObservedRunningTime="2025-09-04 04:27:20.002212947 +0000 UTC m=+33.351533237"
Sep 4 04:27:20.982618 kubelet[2781]: E0904 04:27:20.982571 2781 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 4 04:27:20.983225 kubelet[2781]: E0904 04:27:20.982670 2781 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 4 04:27:21.984497 kubelet[2781]: E0904 04:27:21.984438 2781 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 4 04:27:24.271423 systemd[1]: Started sshd@10-10.0.0.124:22-10.0.0.1:38664.service - OpenSSH per-connection server daemon (10.0.0.1:38664).
Sep 4 04:27:24.361392 sshd[4129]: Accepted publickey for core from 10.0.0.1 port 38664 ssh2: RSA SHA256:9+vpZc6EfwWxHenC1ZKsuuGVz7bQEj3BE+z2aG6aI0U
Sep 4 04:27:24.363246 sshd-session[4129]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 4 04:27:24.368734 systemd-logind[1541]: New session 11 of user core.
Sep 4 04:27:24.380152 systemd[1]: Started session-11.scope - Session 11 of User core.
Sep 4 04:27:24.524713 sshd[4132]: Connection closed by 10.0.0.1 port 38664
Sep 4 04:27:24.525317 sshd-session[4129]: pam_unix(sshd:session): session closed for user core
Sep 4 04:27:24.530329 systemd[1]: sshd@10-10.0.0.124:22-10.0.0.1:38664.service: Deactivated successfully.
Sep 4 04:27:24.533135 systemd[1]: session-11.scope: Deactivated successfully.
Sep 4 04:27:24.534125 systemd-logind[1541]: Session 11 logged out. Waiting for processes to exit.
Sep 4 04:27:24.535918 systemd-logind[1541]: Removed session 11.
Sep 4 04:27:29.543586 systemd[1]: Started sshd@11-10.0.0.124:22-10.0.0.1:38732.service - OpenSSH per-connection server daemon (10.0.0.1:38732).
Sep 4 04:27:29.723163 sshd[4149]: Accepted publickey for core from 10.0.0.1 port 38732 ssh2: RSA SHA256:9+vpZc6EfwWxHenC1ZKsuuGVz7bQEj3BE+z2aG6aI0U
Sep 4 04:27:29.724788 sshd-session[4149]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 4 04:27:29.730535 systemd-logind[1541]: New session 12 of user core.
Sep 4 04:27:29.744222 systemd[1]: Started session-12.scope - Session 12 of User core.
Sep 4 04:27:29.876587 sshd[4152]: Connection closed by 10.0.0.1 port 38732
Sep 4 04:27:29.876836 sshd-session[4149]: pam_unix(sshd:session): session closed for user core
Sep 4 04:27:29.881458 systemd[1]: sshd@11-10.0.0.124:22-10.0.0.1:38732.service: Deactivated successfully.
Sep 4 04:27:29.883664 systemd[1]: session-12.scope: Deactivated successfully.
Sep 4 04:27:29.884535 systemd-logind[1541]: Session 12 logged out. Waiting for processes to exit.
Sep 4 04:27:29.886052 systemd-logind[1541]: Removed session 12.
Sep 4 04:27:34.904116 systemd[1]: Started sshd@12-10.0.0.124:22-10.0.0.1:49238.service - OpenSSH per-connection server daemon (10.0.0.1:49238).
Sep 4 04:27:34.958843 sshd[4166]: Accepted publickey for core from 10.0.0.1 port 49238 ssh2: RSA SHA256:9+vpZc6EfwWxHenC1ZKsuuGVz7bQEj3BE+z2aG6aI0U
Sep 4 04:27:34.960762 sshd-session[4166]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 4 04:27:34.966138 systemd-logind[1541]: New session 13 of user core.
Sep 4 04:27:34.976101 systemd[1]: Started session-13.scope - Session 13 of User core.
Sep 4 04:27:35.110474 sshd[4169]: Connection closed by 10.0.0.1 port 49238
Sep 4 04:27:35.110980 sshd-session[4166]: pam_unix(sshd:session): session closed for user core
Sep 4 04:27:35.116595 systemd[1]: sshd@12-10.0.0.124:22-10.0.0.1:49238.service: Deactivated successfully.
Sep 4 04:27:35.119330 systemd[1]: session-13.scope: Deactivated successfully.
Sep 4 04:27:35.120433 systemd-logind[1541]: Session 13 logged out. Waiting for processes to exit.
Sep 4 04:27:35.122053 systemd-logind[1541]: Removed session 13.
Sep 4 04:27:40.125763 systemd[1]: Started sshd@13-10.0.0.124:22-10.0.0.1:37156.service - OpenSSH per-connection server daemon (10.0.0.1:37156).
Sep 4 04:27:40.173204 sshd[4183]: Accepted publickey for core from 10.0.0.1 port 37156 ssh2: RSA SHA256:9+vpZc6EfwWxHenC1ZKsuuGVz7bQEj3BE+z2aG6aI0U
Sep 4 04:27:40.175369 sshd-session[4183]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 4 04:27:40.181141 systemd-logind[1541]: New session 14 of user core.
Sep 4 04:27:40.190118 systemd[1]: Started session-14.scope - Session 14 of User core.
Sep 4 04:27:40.312318 sshd[4186]: Connection closed by 10.0.0.1 port 37156
Sep 4 04:27:40.312746 sshd-session[4183]: pam_unix(sshd:session): session closed for user core
Sep 4 04:27:40.326293 systemd[1]: sshd@13-10.0.0.124:22-10.0.0.1:37156.service: Deactivated successfully.
Sep 4 04:27:40.328670 systemd[1]: session-14.scope: Deactivated successfully.
Sep 4 04:27:40.329525 systemd-logind[1541]: Session 14 logged out. Waiting for processes to exit.
Sep 4 04:27:40.332922 systemd[1]: Started sshd@14-10.0.0.124:22-10.0.0.1:37168.service - OpenSSH per-connection server daemon (10.0.0.1:37168).
Sep 4 04:27:40.333673 systemd-logind[1541]: Removed session 14.
Sep 4 04:27:40.390120 sshd[4200]: Accepted publickey for core from 10.0.0.1 port 37168 ssh2: RSA SHA256:9+vpZc6EfwWxHenC1ZKsuuGVz7bQEj3BE+z2aG6aI0U
Sep 4 04:27:40.391922 sshd-session[4200]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 4 04:27:40.397471 systemd-logind[1541]: New session 15 of user core.
Sep 4 04:27:40.407021 systemd[1]: Started session-15.scope - Session 15 of User core.
Sep 4 04:27:40.574473 sshd[4203]: Connection closed by 10.0.0.1 port 37168
Sep 4 04:27:40.574914 sshd-session[4200]: pam_unix(sshd:session): session closed for user core
Sep 4 04:27:40.590461 systemd[1]: sshd@14-10.0.0.124:22-10.0.0.1:37168.service: Deactivated successfully.
Sep 4 04:27:40.594880 systemd[1]: session-15.scope: Deactivated successfully.
Sep 4 04:27:40.596903 systemd-logind[1541]: Session 15 logged out. Waiting for processes to exit.
Sep 4 04:27:40.600330 systemd[1]: Started sshd@15-10.0.0.124:22-10.0.0.1:37184.service - OpenSSH per-connection server daemon (10.0.0.1:37184).
Sep 4 04:27:40.601250 systemd-logind[1541]: Removed session 15.
Sep 4 04:27:40.652728 sshd[4215]: Accepted publickey for core from 10.0.0.1 port 37184 ssh2: RSA SHA256:9+vpZc6EfwWxHenC1ZKsuuGVz7bQEj3BE+z2aG6aI0U
Sep 4 04:27:40.654215 sshd-session[4215]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 4 04:27:40.658959 systemd-logind[1541]: New session 16 of user core.
Sep 4 04:27:40.680274 systemd[1]: Started session-16.scope - Session 16 of User core.
Sep 4 04:27:40.798353 sshd[4218]: Connection closed by 10.0.0.1 port 37184
Sep 4 04:27:40.798706 sshd-session[4215]: pam_unix(sshd:session): session closed for user core
Sep 4 04:27:40.803764 systemd[1]: sshd@15-10.0.0.124:22-10.0.0.1:37184.service: Deactivated successfully.
Sep 4 04:27:40.805883 systemd[1]: session-16.scope: Deactivated successfully.
Sep 4 04:27:40.806678 systemd-logind[1541]: Session 16 logged out. Waiting for processes to exit.
Sep 4 04:27:40.808421 systemd-logind[1541]: Removed session 16.
Sep 4 04:27:45.818265 systemd[1]: Started sshd@16-10.0.0.124:22-10.0.0.1:37194.service - OpenSSH per-connection server daemon (10.0.0.1:37194).
Sep 4 04:27:45.875632 sshd[4232]: Accepted publickey for core from 10.0.0.1 port 37194 ssh2: RSA SHA256:9+vpZc6EfwWxHenC1ZKsuuGVz7bQEj3BE+z2aG6aI0U
Sep 4 04:27:45.877422 sshd-session[4232]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 4 04:27:45.882557 systemd-logind[1541]: New session 17 of user core.
Sep 4 04:27:45.891013 systemd[1]: Started session-17.scope - Session 17 of User core.
Sep 4 04:27:46.037565 sshd[4235]: Connection closed by 10.0.0.1 port 37194
Sep 4 04:27:46.038010 sshd-session[4232]: pam_unix(sshd:session): session closed for user core
Sep 4 04:27:46.042547 systemd[1]: sshd@16-10.0.0.124:22-10.0.0.1:37194.service: Deactivated successfully.
Sep 4 04:27:46.044946 systemd[1]: session-17.scope: Deactivated successfully.
Sep 4 04:27:46.046014 systemd-logind[1541]: Session 17 logged out. Waiting for processes to exit.
Sep 4 04:27:46.048021 systemd-logind[1541]: Removed session 17.
Sep 4 04:27:51.050128 systemd[1]: Started sshd@17-10.0.0.124:22-10.0.0.1:41878.service - OpenSSH per-connection server daemon (10.0.0.1:41878).
Sep 4 04:27:51.110610 sshd[4250]: Accepted publickey for core from 10.0.0.1 port 41878 ssh2: RSA SHA256:9+vpZc6EfwWxHenC1ZKsuuGVz7bQEj3BE+z2aG6aI0U
Sep 4 04:27:51.112567 sshd-session[4250]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 4 04:27:51.119128 systemd-logind[1541]: New session 18 of user core.
Sep 4 04:27:51.130144 systemd[1]: Started session-18.scope - Session 18 of User core.
Sep 4 04:27:51.281434 sshd[4253]: Connection closed by 10.0.0.1 port 41878
Sep 4 04:27:51.282135 sshd-session[4250]: pam_unix(sshd:session): session closed for user core
Sep 4 04:27:51.295645 systemd[1]: sshd@17-10.0.0.124:22-10.0.0.1:41878.service: Deactivated successfully.
Sep 4 04:27:51.298697 systemd[1]: session-18.scope: Deactivated successfully.
Sep 4 04:27:51.300158 systemd-logind[1541]: Session 18 logged out. Waiting for processes to exit.
Sep 4 04:27:51.306106 systemd[1]: Started sshd@18-10.0.0.124:22-10.0.0.1:41894.service - OpenSSH per-connection server daemon (10.0.0.1:41894).
Sep 4 04:27:51.307448 systemd-logind[1541]: Removed session 18.
Sep 4 04:27:51.384400 sshd[4267]: Accepted publickey for core from 10.0.0.1 port 41894 ssh2: RSA SHA256:9+vpZc6EfwWxHenC1ZKsuuGVz7bQEj3BE+z2aG6aI0U
Sep 4 04:27:51.387713 sshd-session[4267]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 4 04:27:51.399975 systemd-logind[1541]: New session 19 of user core.
Sep 4 04:27:51.419167 systemd[1]: Started session-19.scope - Session 19 of User core.
Sep 4 04:27:51.799070 sshd[4270]: Connection closed by 10.0.0.1 port 41894
Sep 4 04:27:51.799841 sshd-session[4267]: pam_unix(sshd:session): session closed for user core
Sep 4 04:27:51.821108 systemd[1]: sshd@18-10.0.0.124:22-10.0.0.1:41894.service: Deactivated successfully.
Sep 4 04:27:51.825458 systemd[1]: session-19.scope: Deactivated successfully.
Sep 4 04:27:51.827001 systemd-logind[1541]: Session 19 logged out. Waiting for processes to exit.
Sep 4 04:27:51.836377 systemd[1]: Started sshd@19-10.0.0.124:22-10.0.0.1:41898.service - OpenSSH per-connection server daemon (10.0.0.1:41898).
Sep 4 04:27:51.837522 systemd-logind[1541]: Removed session 19.
Sep 4 04:27:51.919020 sshd[4282]: Accepted publickey for core from 10.0.0.1 port 41898 ssh2: RSA SHA256:9+vpZc6EfwWxHenC1ZKsuuGVz7bQEj3BE+z2aG6aI0U
Sep 4 04:27:51.921881 sshd-session[4282]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 4 04:27:51.929750 systemd-logind[1541]: New session 20 of user core.
Sep 4 04:27:51.941256 systemd[1]: Started session-20.scope - Session 20 of User core.
Sep 4 04:27:52.569500 sshd[4285]: Connection closed by 10.0.0.1 port 41898
Sep 4 04:27:52.569916 sshd-session[4282]: pam_unix(sshd:session): session closed for user core
Sep 4 04:27:52.587614 systemd[1]: sshd@19-10.0.0.124:22-10.0.0.1:41898.service: Deactivated successfully.
Sep 4 04:27:52.590360 systemd[1]: session-20.scope: Deactivated successfully.
Sep 4 04:27:52.591756 systemd-logind[1541]: Session 20 logged out. Waiting for processes to exit.
Sep 4 04:27:52.597223 systemd[1]: Started sshd@20-10.0.0.124:22-10.0.0.1:41900.service - OpenSSH per-connection server daemon (10.0.0.1:41900).
Sep 4 04:27:52.599000 systemd-logind[1541]: Removed session 20.
Sep 4 04:27:52.661096 sshd[4305]: Accepted publickey for core from 10.0.0.1 port 41900 ssh2: RSA SHA256:9+vpZc6EfwWxHenC1ZKsuuGVz7bQEj3BE+z2aG6aI0U
Sep 4 04:27:52.663068 sshd-session[4305]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 4 04:27:52.668356 systemd-logind[1541]: New session 21 of user core.
Sep 4 04:27:52.679156 systemd[1]: Started session-21.scope - Session 21 of User core.
Sep 4 04:27:53.203004 sshd[4310]: Connection closed by 10.0.0.1 port 41900
Sep 4 04:27:53.203496 sshd-session[4305]: pam_unix(sshd:session): session closed for user core
Sep 4 04:27:53.219677 systemd[1]: sshd@20-10.0.0.124:22-10.0.0.1:41900.service: Deactivated successfully.
Sep 4 04:27:53.222231 systemd[1]: session-21.scope: Deactivated successfully.
Sep 4 04:27:53.224403 systemd-logind[1541]: Session 21 logged out. Waiting for processes to exit.
Sep 4 04:27:53.226675 systemd[1]: Started sshd@21-10.0.0.124:22-10.0.0.1:41908.service - OpenSSH per-connection server daemon (10.0.0.1:41908).
Sep 4 04:27:53.228333 systemd-logind[1541]: Removed session 21.
Sep 4 04:27:53.279557 sshd[4321]: Accepted publickey for core from 10.0.0.1 port 41908 ssh2: RSA SHA256:9+vpZc6EfwWxHenC1ZKsuuGVz7bQEj3BE+z2aG6aI0U
Sep 4 04:27:53.281364 sshd-session[4321]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 4 04:27:53.286519 systemd-logind[1541]: New session 22 of user core.
Sep 4 04:27:53.302159 systemd[1]: Started session-22.scope - Session 22 of User core.
Sep 4 04:27:53.477588 sshd[4324]: Connection closed by 10.0.0.1 port 41908
Sep 4 04:27:53.478697 sshd-session[4321]: pam_unix(sshd:session): session closed for user core
Sep 4 04:27:53.484902 systemd[1]: sshd@21-10.0.0.124:22-10.0.0.1:41908.service: Deactivated successfully.
Sep 4 04:27:53.487874 systemd[1]: session-22.scope: Deactivated successfully.
Sep 4 04:27:53.489136 systemd-logind[1541]: Session 22 logged out. Waiting for processes to exit.
Sep 4 04:27:53.491214 systemd-logind[1541]: Removed session 22.
Sep 4 04:27:58.504331 systemd[1]: Started sshd@22-10.0.0.124:22-10.0.0.1:41914.service - OpenSSH per-connection server daemon (10.0.0.1:41914).
Sep 4 04:27:58.559629 sshd[4337]: Accepted publickey for core from 10.0.0.1 port 41914 ssh2: RSA SHA256:9+vpZc6EfwWxHenC1ZKsuuGVz7bQEj3BE+z2aG6aI0U
Sep 4 04:27:58.561247 sshd-session[4337]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 4 04:27:58.565796 systemd-logind[1541]: New session 23 of user core.
Sep 4 04:27:58.576003 systemd[1]: Started session-23.scope - Session 23 of User core.
Sep 4 04:27:58.701849 sshd[4340]: Connection closed by 10.0.0.1 port 41914
Sep 4 04:27:58.702275 sshd-session[4337]: pam_unix(sshd:session): session closed for user core
Sep 4 04:27:58.706953 systemd[1]: sshd@22-10.0.0.124:22-10.0.0.1:41914.service: Deactivated successfully.
Sep 4 04:27:58.709252 systemd[1]: session-23.scope: Deactivated successfully.
Sep 4 04:27:58.710214 systemd-logind[1541]: Session 23 logged out. Waiting for processes to exit.
Sep 4 04:27:58.712007 systemd-logind[1541]: Removed session 23.
Sep 4 04:28:02.752097 kubelet[2781]: E0904 04:28:02.751959 2781 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 4 04:28:03.717035 systemd[1]: Started sshd@23-10.0.0.124:22-10.0.0.1:43134.service - OpenSSH per-connection server daemon (10.0.0.1:43134).
Sep 4 04:28:03.773211 sshd[4355]: Accepted publickey for core from 10.0.0.1 port 43134 ssh2: RSA SHA256:9+vpZc6EfwWxHenC1ZKsuuGVz7bQEj3BE+z2aG6aI0U
Sep 4 04:28:03.775174 sshd-session[4355]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 4 04:28:03.780080 systemd-logind[1541]: New session 24 of user core.
Sep 4 04:28:03.791155 systemd[1]: Started session-24.scope - Session 24 of User core.
Sep 4 04:28:03.920442 sshd[4358]: Connection closed by 10.0.0.1 port 43134
Sep 4 04:28:03.920904 sshd-session[4355]: pam_unix(sshd:session): session closed for user core
Sep 4 04:28:03.926885 systemd[1]: sshd@23-10.0.0.124:22-10.0.0.1:43134.service: Deactivated successfully.
Sep 4 04:28:03.929689 systemd[1]: session-24.scope: Deactivated successfully.
Sep 4 04:28:03.930817 systemd-logind[1541]: Session 24 logged out. Waiting for processes to exit.
Sep 4 04:28:03.932608 systemd-logind[1541]: Removed session 24.
Sep 4 04:28:08.934905 systemd[1]: Started sshd@24-10.0.0.124:22-10.0.0.1:43148.service - OpenSSH per-connection server daemon (10.0.0.1:43148).
Sep 4 04:28:09.009622 sshd[4372]: Accepted publickey for core from 10.0.0.1 port 43148 ssh2: RSA SHA256:9+vpZc6EfwWxHenC1ZKsuuGVz7bQEj3BE+z2aG6aI0U
Sep 4 04:28:09.011894 sshd-session[4372]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 4 04:28:09.018402 systemd-logind[1541]: New session 25 of user core.
Sep 4 04:28:09.027262 systemd[1]: Started session-25.scope - Session 25 of User core.
Sep 4 04:28:09.166497 sshd[4375]: Connection closed by 10.0.0.1 port 43148
Sep 4 04:28:09.166994 sshd-session[4372]: pam_unix(sshd:session): session closed for user core
Sep 4 04:28:09.187035 systemd[1]: sshd@24-10.0.0.124:22-10.0.0.1:43148.service: Deactivated successfully.
Sep 4 04:28:09.189481 systemd[1]: session-25.scope: Deactivated successfully.
Sep 4 04:28:09.190327 systemd-logind[1541]: Session 25 logged out. Waiting for processes to exit.
Sep 4 04:28:09.193793 systemd[1]: Started sshd@25-10.0.0.124:22-10.0.0.1:43150.service - OpenSSH per-connection server daemon (10.0.0.1:43150).
Sep 4 04:28:09.194505 systemd-logind[1541]: Removed session 25.
Sep 4 04:28:09.255838 sshd[4388]: Accepted publickey for core from 10.0.0.1 port 43150 ssh2: RSA SHA256:9+vpZc6EfwWxHenC1ZKsuuGVz7bQEj3BE+z2aG6aI0U
Sep 4 04:28:09.257715 sshd-session[4388]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 4 04:28:09.263132 systemd-logind[1541]: New session 26 of user core.
Sep 4 04:28:09.276388 systemd[1]: Started session-26.scope - Session 26 of User core.
Sep 4 04:28:10.689224 kubelet[2781]: I0904 04:28:10.688804 2781 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-qqztq" podStartSLOduration=78.688780444 podStartE2EDuration="1m18.688780444s" podCreationTimestamp="2025-09-04 04:26:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-04 04:27:20.019009457 +0000 UTC m=+33.368329747" watchObservedRunningTime="2025-09-04 04:28:10.688780444 +0000 UTC m=+84.038100734"
Sep 4 04:28:10.700368 containerd[1569]: time="2025-09-04T04:28:10.700285040Z" level=info msg="StopContainer for \"69720d35dfb99c7f6e6df555c7ea0450b04b3c1e3a9bcfd8dbb4d068b0a9ca65\" with timeout 30 (s)"
Sep 4 04:28:10.709026 containerd[1569]: time="2025-09-04T04:28:10.708963308Z" level=info msg="Stop container \"69720d35dfb99c7f6e6df555c7ea0450b04b3c1e3a9bcfd8dbb4d068b0a9ca65\" with signal terminated"
Sep 4 04:28:10.728615 systemd[1]: cri-containerd-69720d35dfb99c7f6e6df555c7ea0450b04b3c1e3a9bcfd8dbb4d068b0a9ca65.scope: Deactivated successfully.
Sep 4 04:28:10.731007 containerd[1569]: time="2025-09-04T04:28:10.730951806Z" level=info msg="TaskExit event in podsandbox handler container_id:\"69720d35dfb99c7f6e6df555c7ea0450b04b3c1e3a9bcfd8dbb4d068b0a9ca65\" id:\"69720d35dfb99c7f6e6df555c7ea0450b04b3c1e3a9bcfd8dbb4d068b0a9ca65\" pid:3340 exited_at:{seconds:1756960090 nanos:730285061}"
Sep 4 04:28:10.731113 containerd[1569]: time="2025-09-04T04:28:10.730997612Z" level=info msg="received exit event container_id:\"69720d35dfb99c7f6e6df555c7ea0450b04b3c1e3a9bcfd8dbb4d068b0a9ca65\" id:\"69720d35dfb99c7f6e6df555c7ea0450b04b3c1e3a9bcfd8dbb4d068b0a9ca65\" pid:3340 exited_at:{seconds:1756960090 nanos:730285061}"
Sep 4 04:28:10.741370 containerd[1569]: time="2025-09-04T04:28:10.741268630Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Sep 4 04:28:10.744842 containerd[1569]: time="2025-09-04T04:28:10.744780380Z" level=info msg="TaskExit event in podsandbox handler container_id:\"603a78728fb87d0558a428ba2614f05af242742a967eb831be09568412de5c23\" id:\"a7c820f556652ee3951a7be244abef81c42bd7887cfa3a13942fd5e219f5b16f\" pid:4412 exited_at:{seconds:1756960090 nanos:744342349}"
Sep 4 04:28:10.747718 containerd[1569]: time="2025-09-04T04:28:10.747659560Z" level=info msg="StopContainer for \"603a78728fb87d0558a428ba2614f05af242742a967eb831be09568412de5c23\" with timeout 2 (s)"
Sep 4 04:28:10.748122 containerd[1569]: time="2025-09-04T04:28:10.748074346Z" level=info msg="Stop container \"603a78728fb87d0558a428ba2614f05af242742a967eb831be09568412de5c23\" with signal terminated"
Sep 4 04:28:10.762537 systemd-networkd[1493]: lxc_health: Link DOWN
Sep 4 04:28:10.762551 systemd-networkd[1493]: lxc_health: Lost carrier
Sep 4 04:28:10.765213 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-69720d35dfb99c7f6e6df555c7ea0450b04b3c1e3a9bcfd8dbb4d068b0a9ca65-rootfs.mount: Deactivated successfully.
Sep 4 04:28:10.782355 systemd[1]: cri-containerd-603a78728fb87d0558a428ba2614f05af242742a967eb831be09568412de5c23.scope: Deactivated successfully.
Sep 4 04:28:10.782882 systemd[1]: cri-containerd-603a78728fb87d0558a428ba2614f05af242742a967eb831be09568412de5c23.scope: Consumed 7.736s CPU time, 129M memory peak, 216K read from disk, 13.3M written to disk.
Sep 4 04:28:10.784123 containerd[1569]: time="2025-09-04T04:28:10.784042645Z" level=info msg="received exit event container_id:\"603a78728fb87d0558a428ba2614f05af242742a967eb831be09568412de5c23\" id:\"603a78728fb87d0558a428ba2614f05af242742a967eb831be09568412de5c23\" pid:3430 exited_at:{seconds:1756960090 nanos:783168589}"
Sep 4 04:28:10.784412 containerd[1569]: time="2025-09-04T04:28:10.784381018Z" level=info msg="TaskExit event in podsandbox handler container_id:\"603a78728fb87d0558a428ba2614f05af242742a967eb831be09568412de5c23\" id:\"603a78728fb87d0558a428ba2614f05af242742a967eb831be09568412de5c23\" pid:3430 exited_at:{seconds:1756960090 nanos:783168589}"
Sep 4 04:28:10.798301 containerd[1569]: time="2025-09-04T04:28:10.798210624Z" level=info msg="StopContainer for \"69720d35dfb99c7f6e6df555c7ea0450b04b3c1e3a9bcfd8dbb4d068b0a9ca65\" returns successfully"
Sep 4 04:28:10.802153 containerd[1569]: time="2025-09-04T04:28:10.802091323Z" level=info msg="StopPodSandbox for \"4b1f89de00f5350b7d3c46a0e6970de75f79a3f318da3902040cbe181292cd79\""
Sep 4 04:28:10.811239 containerd[1569]: time="2025-09-04T04:28:10.811117920Z" level=info msg="Container to stop \"69720d35dfb99c7f6e6df555c7ea0450b04b3c1e3a9bcfd8dbb4d068b0a9ca65\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Sep 4 04:28:10.811665 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-603a78728fb87d0558a428ba2614f05af242742a967eb831be09568412de5c23-rootfs.mount: Deactivated successfully.
Sep 4 04:28:10.821840 systemd[1]: cri-containerd-4b1f89de00f5350b7d3c46a0e6970de75f79a3f318da3902040cbe181292cd79.scope: Deactivated successfully.
Sep 4 04:28:10.828206 containerd[1569]: time="2025-09-04T04:28:10.827993122Z" level=info msg="TaskExit event in podsandbox handler container_id:\"4b1f89de00f5350b7d3c46a0e6970de75f79a3f318da3902040cbe181292cd79\" id:\"4b1f89de00f5350b7d3c46a0e6970de75f79a3f318da3902040cbe181292cd79\" pid:3003 exit_status:137 exited_at:{seconds:1756960090 nanos:827505047}"
Sep 4 04:28:10.840789 containerd[1569]: time="2025-09-04T04:28:10.840733833Z" level=info msg="StopContainer for \"603a78728fb87d0558a428ba2614f05af242742a967eb831be09568412de5c23\" returns successfully"
Sep 4 04:28:10.841957 containerd[1569]: time="2025-09-04T04:28:10.841917238Z" level=info msg="StopPodSandbox for \"b9072e739f09e1bf8d3a193477f35b477598fdfdfd11e784d0e55c55b2de4dca\""
Sep 4 04:28:10.842157 containerd[1569]: time="2025-09-04T04:28:10.842104874Z" level=info msg="Container to stop \"559d97720eae9273c0fde1a3d25bd5f59a91b0f41ecac7be54036d77dd5e2daf\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Sep 4 04:28:10.842157 containerd[1569]: time="2025-09-04T04:28:10.842140510Z" level=info msg="Container to stop \"b6f4c602397a9e54e95f082bbede782cb9a6c9a34b9f6435e01d207b0c1f713f\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Sep 4 04:28:10.842157 containerd[1569]: time="2025-09-04T04:28:10.842158114Z" level=info msg="Container to stop \"b22cf226defd761aca982ec181193f0040ce0d7932c7822cc3c2d181761513eb\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Sep 4 04:28:10.842157 containerd[1569]: time="2025-09-04T04:28:10.842171610Z" level=info msg="Container to stop \"aa1ffffa1687de8310ae6efe40181e903b645ec2fb95bf4311f9825dea8283f7\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Sep 4 04:28:10.842455 containerd[1569]: time="2025-09-04T04:28:10.842186438Z" level=info msg="Container to stop \"603a78728fb87d0558a428ba2614f05af242742a967eb831be09568412de5c23\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Sep 4 04:28:10.853004 systemd[1]: cri-containerd-b9072e739f09e1bf8d3a193477f35b477598fdfdfd11e784d0e55c55b2de4dca.scope: Deactivated successfully.
Sep 4 04:28:10.868356 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-4b1f89de00f5350b7d3c46a0e6970de75f79a3f318da3902040cbe181292cd79-rootfs.mount: Deactivated successfully.
Sep 4 04:28:10.876474 containerd[1569]: time="2025-09-04T04:28:10.876426038Z" level=info msg="shim disconnected" id=4b1f89de00f5350b7d3c46a0e6970de75f79a3f318da3902040cbe181292cd79 namespace=k8s.io
Sep 4 04:28:10.876970 containerd[1569]: time="2025-09-04T04:28:10.876616641Z" level=warning msg="cleaning up after shim disconnected" id=4b1f89de00f5350b7d3c46a0e6970de75f79a3f318da3902040cbe181292cd79 namespace=k8s.io
Sep 4 04:28:10.888137 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b9072e739f09e1bf8d3a193477f35b477598fdfdfd11e784d0e55c55b2de4dca-rootfs.mount: Deactivated successfully.
Sep 4 04:28:10.896720 containerd[1569]: time="2025-09-04T04:28:10.876630427Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Sep 4 04:28:10.896970 containerd[1569]: time="2025-09-04T04:28:10.892945957Z" level=info msg="shim disconnected" id=b9072e739f09e1bf8d3a193477f35b477598fdfdfd11e784d0e55c55b2de4dca namespace=k8s.io
Sep 4 04:28:10.896970 containerd[1569]: time="2025-09-04T04:28:10.896890227Z" level=warning msg="cleaning up after shim disconnected" id=b9072e739f09e1bf8d3a193477f35b477598fdfdfd11e784d0e55c55b2de4dca namespace=k8s.io
Sep 4 04:28:10.896970 containerd[1569]: time="2025-09-04T04:28:10.896907640Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Sep 4 04:28:10.937489 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-b9072e739f09e1bf8d3a193477f35b477598fdfdfd11e784d0e55c55b2de4dca-shm.mount: Deactivated successfully.
Sep 4 04:28:10.940048 containerd[1569]: time="2025-09-04T04:28:10.939600030Z" level=info msg="TaskExit event in podsandbox handler container_id:\"b9072e739f09e1bf8d3a193477f35b477598fdfdfd11e784d0e55c55b2de4dca\" id:\"b9072e739f09e1bf8d3a193477f35b477598fdfdfd11e784d0e55c55b2de4dca\" pid:2936 exit_status:137 exited_at:{seconds:1756960090 nanos:858237527}"
Sep 4 04:28:10.940351 containerd[1569]: time="2025-09-04T04:28:10.940328472Z" level=info msg="TearDown network for sandbox \"4b1f89de00f5350b7d3c46a0e6970de75f79a3f318da3902040cbe181292cd79\" successfully"
Sep 4 04:28:10.940444 containerd[1569]: time="2025-09-04T04:28:10.940425346Z" level=info msg="StopPodSandbox for \"4b1f89de00f5350b7d3c46a0e6970de75f79a3f318da3902040cbe181292cd79\" returns successfully"
Sep 4 04:28:10.945938 containerd[1569]: time="2025-09-04T04:28:10.945379601Z" level=info msg="TearDown network for sandbox \"b9072e739f09e1bf8d3a193477f35b477598fdfdfd11e784d0e55c55b2de4dca\" successfully"
Sep 4 04:28:10.945938 containerd[1569]: time="2025-09-04T04:28:10.945452819Z" level=info msg="StopPodSandbox for \"b9072e739f09e1bf8d3a193477f35b477598fdfdfd11e784d0e55c55b2de4dca\" returns successfully"
Sep 4 04:28:10.947873 containerd[1569]: time="2025-09-04T04:28:10.947804519Z" level=info msg="received exit event sandbox_id:\"4b1f89de00f5350b7d3c46a0e6970de75f79a3f318da3902040cbe181292cd79\" exit_status:137 exited_at:{seconds:1756960090 nanos:827505047}"
Sep 4 04:28:10.948282 containerd[1569]: time="2025-09-04T04:28:10.948251917Z" level=info msg="received exit event sandbox_id:\"b9072e739f09e1bf8d3a193477f35b477598fdfdfd11e784d0e55c55b2de4dca\" exit_status:137 exited_at:{seconds:1756960090 nanos:858237527}"
Sep 4 04:28:11.027780 kubelet[2781]: I0904 04:28:11.027676 2781 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/272d1165-2638-4a13-9d03-5f65b1025287-cilium-run\") pod \"272d1165-2638-4a13-9d03-5f65b1025287\" (UID: \"272d1165-2638-4a13-9d03-5f65b1025287\") "
Sep 4 04:28:11.027780 kubelet[2781]: I0904 04:28:11.027750 2781 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/272d1165-2638-4a13-9d03-5f65b1025287-clustermesh-secrets\") pod \"272d1165-2638-4a13-9d03-5f65b1025287\" (UID: \"272d1165-2638-4a13-9d03-5f65b1025287\") "
Sep 4 04:28:11.027780 kubelet[2781]: I0904 04:28:11.027776 2781 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/272d1165-2638-4a13-9d03-5f65b1025287-hubble-tls\") pod \"272d1165-2638-4a13-9d03-5f65b1025287\" (UID: \"272d1165-2638-4a13-9d03-5f65b1025287\") "
Sep 4 04:28:11.027780 kubelet[2781]: I0904 04:28:11.027794 2781 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/272d1165-2638-4a13-9d03-5f65b1025287-etc-cni-netd\") pod \"272d1165-2638-4a13-9d03-5f65b1025287\" (UID: \"272d1165-2638-4a13-9d03-5f65b1025287\") "
Sep 4 04:28:11.027780 kubelet[2781]: I0904 04:28:11.027814 2781 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zmt6l\" (UniqueName: \"kubernetes.io/projected/272d1165-2638-4a13-9d03-5f65b1025287-kube-api-access-zmt6l\") pod \"272d1165-2638-4a13-9d03-5f65b1025287\" (UID: \"272d1165-2638-4a13-9d03-5f65b1025287\") "
Sep 4 04:28:11.028234 kubelet[2781]: I0904 04:28:11.027841 2781 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/272d1165-2638-4a13-9d03-5f65b1025287-cilium-cgroup\") pod \"272d1165-2638-4a13-9d03-5f65b1025287\" (UID: \"272d1165-2638-4a13-9d03-5f65b1025287\") "
Sep 4 04:28:11.028234 kubelet[2781]: I0904 04:28:11.027891 2781 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/272d1165-2638-4a13-9d03-5f65b1025287-xtables-lock\") pod \"272d1165-2638-4a13-9d03-5f65b1025287\" (UID: \"272d1165-2638-4a13-9d03-5f65b1025287\") "
Sep 4 04:28:11.028234 kubelet[2781]: I0904 04:28:11.027908 2781 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/272d1165-2638-4a13-9d03-5f65b1025287-host-proc-sys-net\") pod \"272d1165-2638-4a13-9d03-5f65b1025287\" (UID: \"272d1165-2638-4a13-9d03-5f65b1025287\") "
Sep 4 04:28:11.028234 kubelet[2781]: I0904 04:28:11.027884 2781 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/272d1165-2638-4a13-9d03-5f65b1025287-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "272d1165-2638-4a13-9d03-5f65b1025287" (UID: "272d1165-2638-4a13-9d03-5f65b1025287"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Sep 4 04:28:11.028234 kubelet[2781]: I0904 04:28:11.027949 2781 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/272d1165-2638-4a13-9d03-5f65b1025287-hostproc" (OuterVolumeSpecName: "hostproc") pod "272d1165-2638-4a13-9d03-5f65b1025287" (UID: "272d1165-2638-4a13-9d03-5f65b1025287"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Sep 4 04:28:11.028392 kubelet[2781]: I0904 04:28:11.027974 2781 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/272d1165-2638-4a13-9d03-5f65b1025287-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "272d1165-2638-4a13-9d03-5f65b1025287" (UID: "272d1165-2638-4a13-9d03-5f65b1025287"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Sep 4 04:28:11.028392 kubelet[2781]: I0904 04:28:11.027924 2781 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/272d1165-2638-4a13-9d03-5f65b1025287-hostproc\") pod \"272d1165-2638-4a13-9d03-5f65b1025287\" (UID: \"272d1165-2638-4a13-9d03-5f65b1025287\") "
Sep 4 04:28:11.028392 kubelet[2781]: I0904 04:28:11.028052 2781 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/272d1165-2638-4a13-9d03-5f65b1025287-cilium-config-path\") pod \"272d1165-2638-4a13-9d03-5f65b1025287\" (UID: \"272d1165-2638-4a13-9d03-5f65b1025287\") "
Sep 4 04:28:11.028392 kubelet[2781]: I0904 04:28:11.028076 2781 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/272d1165-2638-4a13-9d03-5f65b1025287-host-proc-sys-kernel\") pod \"272d1165-2638-4a13-9d03-5f65b1025287\" (UID: \"272d1165-2638-4a13-9d03-5f65b1025287\") "
Sep 4 04:28:11.028392 kubelet[2781]: I0904 04:28:11.028103 2781 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/272d1165-2638-4a13-9d03-5f65b1025287-lib-modules\") pod \"272d1165-2638-4a13-9d03-5f65b1025287\" (UID: \"272d1165-2638-4a13-9d03-5f65b1025287\") "
Sep 4 04:28:11.028392 kubelet[2781]: I0904 04:28:11.028131 2781 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/332e4948-c0e8-4698-8cb1-4a68650a04f3-cilium-config-path\") pod \"332e4948-c0e8-4698-8cb1-4a68650a04f3\" (UID: \"332e4948-c0e8-4698-8cb1-4a68650a04f3\") "
Sep 4 04:28:11.028543 kubelet[2781]: I0904 04:28:11.028162 2781 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fxmqm\" (UniqueName: \"kubernetes.io/projected/332e4948-c0e8-4698-8cb1-4a68650a04f3-kube-api-access-fxmqm\") pod \"332e4948-c0e8-4698-8cb1-4a68650a04f3\" (UID: \"332e4948-c0e8-4698-8cb1-4a68650a04f3\") "
Sep 4 04:28:11.028543 kubelet[2781]: I0904 04:28:11.028187 2781 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/272d1165-2638-4a13-9d03-5f65b1025287-cni-path\") pod \"272d1165-2638-4a13-9d03-5f65b1025287\" (UID: \"272d1165-2638-4a13-9d03-5f65b1025287\") "
Sep 4 04:28:11.028543 kubelet[2781]: I0904 04:28:11.028206 2781 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/272d1165-2638-4a13-9d03-5f65b1025287-bpf-maps\") pod \"272d1165-2638-4a13-9d03-5f65b1025287\" (UID: \"272d1165-2638-4a13-9d03-5f65b1025287\") "
Sep 4 04:28:11.028543 kubelet[2781]: I0904 04:28:11.028295 2781 reconciler_common.go:299] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/272d1165-2638-4a13-9d03-5f65b1025287-cilium-run\") on node \"localhost\" DevicePath \"\""
Sep 4 04:28:11.028543 kubelet[2781]: I0904 04:28:11.028309 2781 reconciler_common.go:299] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/272d1165-2638-4a13-9d03-5f65b1025287-etc-cni-netd\") on node \"localhost\" DevicePath \"\""
Sep 4 04:28:11.028543 kubelet[2781]: I0904 04:28:11.028338 2781 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/272d1165-2638-4a13-9d03-5f65b1025287-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "272d1165-2638-4a13-9d03-5f65b1025287" (UID: "272d1165-2638-4a13-9d03-5f65b1025287"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Sep 4 04:28:11.028897 kubelet[2781]: I0904 04:28:11.028774 2781 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/272d1165-2638-4a13-9d03-5f65b1025287-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "272d1165-2638-4a13-9d03-5f65b1025287" (UID: "272d1165-2638-4a13-9d03-5f65b1025287"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Sep 4 04:28:11.028897 kubelet[2781]: I0904 04:28:11.028867 2781 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/272d1165-2638-4a13-9d03-5f65b1025287-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "272d1165-2638-4a13-9d03-5f65b1025287" (UID: "272d1165-2638-4a13-9d03-5f65b1025287"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Sep 4 04:28:11.032392 kubelet[2781]: I0904 04:28:11.031927 2781 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/272d1165-2638-4a13-9d03-5f65b1025287-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "272d1165-2638-4a13-9d03-5f65b1025287" (UID: "272d1165-2638-4a13-9d03-5f65b1025287"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Sep 4 04:28:11.032566 kubelet[2781]: I0904 04:28:11.032026 2781 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/272d1165-2638-4a13-9d03-5f65b1025287-cni-path" (OuterVolumeSpecName: "cni-path") pod "272d1165-2638-4a13-9d03-5f65b1025287" (UID: "272d1165-2638-4a13-9d03-5f65b1025287"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Sep 4 04:28:11.032566 kubelet[2781]: I0904 04:28:11.032491 2781 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/332e4948-c0e8-4698-8cb1-4a68650a04f3-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "332e4948-c0e8-4698-8cb1-4a68650a04f3" (UID: "332e4948-c0e8-4698-8cb1-4a68650a04f3"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Sep 4 04:28:11.032658 kubelet[2781]: I0904 04:28:11.032512 2781 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/272d1165-2638-4a13-9d03-5f65b1025287-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "272d1165-2638-4a13-9d03-5f65b1025287" (UID: "272d1165-2638-4a13-9d03-5f65b1025287"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Sep 4 04:28:11.032658 kubelet[2781]: I0904 04:28:11.032525 2781 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/272d1165-2638-4a13-9d03-5f65b1025287-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "272d1165-2638-4a13-9d03-5f65b1025287" (UID: "272d1165-2638-4a13-9d03-5f65b1025287"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Sep 4 04:28:11.032711 kubelet[2781]: I0904 04:28:11.032633 2781 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/332e4948-c0e8-4698-8cb1-4a68650a04f3-kube-api-access-fxmqm" (OuterVolumeSpecName: "kube-api-access-fxmqm") pod "332e4948-c0e8-4698-8cb1-4a68650a04f3" (UID: "332e4948-c0e8-4698-8cb1-4a68650a04f3"). InnerVolumeSpecName "kube-api-access-fxmqm". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Sep 4 04:28:11.034035 kubelet[2781]: I0904 04:28:11.033987 2781 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/272d1165-2638-4a13-9d03-5f65b1025287-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "272d1165-2638-4a13-9d03-5f65b1025287" (UID: "272d1165-2638-4a13-9d03-5f65b1025287"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Sep 4 04:28:11.035387 kubelet[2781]: I0904 04:28:11.034888 2781 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/272d1165-2638-4a13-9d03-5f65b1025287-kube-api-access-zmt6l" (OuterVolumeSpecName: "kube-api-access-zmt6l") pod "272d1165-2638-4a13-9d03-5f65b1025287" (UID: "272d1165-2638-4a13-9d03-5f65b1025287"). InnerVolumeSpecName "kube-api-access-zmt6l". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Sep 4 04:28:11.035611 kubelet[2781]: I0904 04:28:11.035564 2781 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/272d1165-2638-4a13-9d03-5f65b1025287-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "272d1165-2638-4a13-9d03-5f65b1025287" (UID: "272d1165-2638-4a13-9d03-5f65b1025287"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Sep 4 04:28:11.035804 kubelet[2781]: I0904 04:28:11.035677 2781 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/272d1165-2638-4a13-9d03-5f65b1025287-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "272d1165-2638-4a13-9d03-5f65b1025287" (UID: "272d1165-2638-4a13-9d03-5f65b1025287"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Sep 4 04:28:11.086889 kubelet[2781]: I0904 04:28:11.086389 2781 scope.go:117] "RemoveContainer" containerID="69720d35dfb99c7f6e6df555c7ea0450b04b3c1e3a9bcfd8dbb4d068b0a9ca65"
Sep 4 04:28:11.089620 containerd[1569]: time="2025-09-04T04:28:11.089578201Z" level=info msg="RemoveContainer for \"69720d35dfb99c7f6e6df555c7ea0450b04b3c1e3a9bcfd8dbb4d068b0a9ca65\""
Sep 4 04:28:11.092344 systemd[1]: Removed slice kubepods-besteffort-pod332e4948_c0e8_4698_8cb1_4a68650a04f3.slice - libcontainer container kubepods-besteffort-pod332e4948_c0e8_4698_8cb1_4a68650a04f3.slice.
Sep 4 04:28:11.103340 systemd[1]: Removed slice kubepods-burstable-pod272d1165_2638_4a13_9d03_5f65b1025287.slice - libcontainer container kubepods-burstable-pod272d1165_2638_4a13_9d03_5f65b1025287.slice.
Sep 4 04:28:11.103469 systemd[1]: kubepods-burstable-pod272d1165_2638_4a13_9d03_5f65b1025287.slice: Consumed 7.875s CPU time, 129.3M memory peak, 224K read from disk, 13.3M written to disk.
Sep 4 04:28:11.109069 containerd[1569]: time="2025-09-04T04:28:11.109015492Z" level=info msg="RemoveContainer for \"69720d35dfb99c7f6e6df555c7ea0450b04b3c1e3a9bcfd8dbb4d068b0a9ca65\" returns successfully"
Sep 4 04:28:11.109492 kubelet[2781]: I0904 04:28:11.109446 2781 scope.go:117] "RemoveContainer" containerID="69720d35dfb99c7f6e6df555c7ea0450b04b3c1e3a9bcfd8dbb4d068b0a9ca65"
Sep 4 04:28:11.109785 containerd[1569]: time="2025-09-04T04:28:11.109743222Z" level=error msg="ContainerStatus for \"69720d35dfb99c7f6e6df555c7ea0450b04b3c1e3a9bcfd8dbb4d068b0a9ca65\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"69720d35dfb99c7f6e6df555c7ea0450b04b3c1e3a9bcfd8dbb4d068b0a9ca65\": not found"
Sep 4 04:28:11.111844 kubelet[2781]: E0904 04:28:11.111793 2781 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"69720d35dfb99c7f6e6df555c7ea0450b04b3c1e3a9bcfd8dbb4d068b0a9ca65\": not found" containerID="69720d35dfb99c7f6e6df555c7ea0450b04b3c1e3a9bcfd8dbb4d068b0a9ca65"
Sep 4 04:28:11.112033 kubelet[2781]: I0904 04:28:11.111848 2781 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"69720d35dfb99c7f6e6df555c7ea0450b04b3c1e3a9bcfd8dbb4d068b0a9ca65"} err="failed to get container status \"69720d35dfb99c7f6e6df555c7ea0450b04b3c1e3a9bcfd8dbb4d068b0a9ca65\": rpc error: code = NotFound desc = an error occurred when try to find container \"69720d35dfb99c7f6e6df555c7ea0450b04b3c1e3a9bcfd8dbb4d068b0a9ca65\": not found"
Sep 4 04:28:11.112033 kubelet[2781]: I0904 04:28:11.111913 2781 scope.go:117] "RemoveContainer" containerID="603a78728fb87d0558a428ba2614f05af242742a967eb831be09568412de5c23"
Sep 4 04:28:11.114004 containerd[1569]: time="2025-09-04T04:28:11.113946240Z" level=info msg="RemoveContainer for \"603a78728fb87d0558a428ba2614f05af242742a967eb831be09568412de5c23\""
Sep 4 04:28:11.120165 containerd[1569]: time="2025-09-04T04:28:11.120104716Z" level=info msg="RemoveContainer for \"603a78728fb87d0558a428ba2614f05af242742a967eb831be09568412de5c23\" returns successfully"
Sep 4 04:28:11.120401 kubelet[2781]: I0904 04:28:11.120370 2781 scope.go:117] "RemoveContainer" containerID="b22cf226defd761aca982ec181193f0040ce0d7932c7822cc3c2d181761513eb"
Sep 4 04:28:11.124159 containerd[1569]: time="2025-09-04T04:28:11.124109739Z" level=info msg="RemoveContainer for \"b22cf226defd761aca982ec181193f0040ce0d7932c7822cc3c2d181761513eb\""
Sep 4 04:28:11.128972 kubelet[2781]: I0904 04:28:11.128910 2781 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-fxmqm\" (UniqueName: \"kubernetes.io/projected/332e4948-c0e8-4698-8cb1-4a68650a04f3-kube-api-access-fxmqm\") on node \"localhost\" DevicePath \"\""
Sep 4 04:28:11.128972 kubelet[2781]: I0904 04:28:11.128943 2781 reconciler_common.go:299] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/272d1165-2638-4a13-9d03-5f65b1025287-cni-path\") on node \"localhost\" DevicePath \"\""
Sep 4 04:28:11.128972 kubelet[2781]: I0904 04:28:11.128954 2781 reconciler_common.go:299] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/272d1165-2638-4a13-9d03-5f65b1025287-bpf-maps\") on node \"localhost\" DevicePath \"\""
Sep 4 04:28:11.128972 kubelet[2781]: I0904 04:28:11.128966 2781 reconciler_common.go:299] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/272d1165-2638-4a13-9d03-5f65b1025287-clustermesh-secrets\") on node \"localhost\" DevicePath \"\""
Sep 4 04:28:11.128972 kubelet[2781]: I0904 04:28:11.128976 2781 reconciler_common.go:299] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/272d1165-2638-4a13-9d03-5f65b1025287-hubble-tls\") on node \"localhost\" DevicePath \"\""
Sep 4 04:28:11.128972 kubelet[2781]: I0904 04:28:11.128986 2781 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-zmt6l\" (UniqueName: \"kubernetes.io/projected/272d1165-2638-4a13-9d03-5f65b1025287-kube-api-access-zmt6l\") on node \"localhost\" DevicePath \"\""
Sep 4 04:28:11.128972 kubelet[2781]: I0904 04:28:11.128996 2781 reconciler_common.go:299] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/272d1165-2638-4a13-9d03-5f65b1025287-cilium-cgroup\") on node \"localhost\" DevicePath \"\""
Sep 4 04:28:11.129384 kubelet[2781]: I0904 04:28:11.129005 2781 reconciler_common.go:299] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/272d1165-2638-4a13-9d03-5f65b1025287-xtables-lock\") on node \"localhost\" DevicePath \"\""
Sep 4 04:28:11.129384 kubelet[2781]: I0904 04:28:11.129014 2781 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/272d1165-2638-4a13-9d03-5f65b1025287-host-proc-sys-net\") on node \"localhost\" DevicePath \"\""
Sep 4 04:28:11.129384 kubelet[2781]: I0904 04:28:11.129023 2781 reconciler_common.go:299] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/272d1165-2638-4a13-9d03-5f65b1025287-hostproc\") on node \"localhost\" DevicePath \"\""
Sep 4 04:28:11.129384 kubelet[2781]: I0904 04:28:11.129032 2781 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/272d1165-2638-4a13-9d03-5f65b1025287-cilium-config-path\") on node \"localhost\" DevicePath \"\""
Sep 4 04:28:11.129384 kubelet[2781]: I0904 04:28:11.129041 2781 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/272d1165-2638-4a13-9d03-5f65b1025287-host-proc-sys-kernel\") on node \"localhost\" DevicePath \"\""
Sep 4 04:28:11.129384 kubelet[2781]: I0904 04:28:11.129050 2781 reconciler_common.go:299] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/272d1165-2638-4a13-9d03-5f65b1025287-lib-modules\") on node \"localhost\" DevicePath \"\""
Sep 4 04:28:11.129384 kubelet[2781]: I0904 04:28:11.129061 2781 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/332e4948-c0e8-4698-8cb1-4a68650a04f3-cilium-config-path\") on node \"localhost\" DevicePath \"\""
Sep 4 04:28:11.129597 containerd[1569]: time="2025-09-04T04:28:11.129350135Z" level=info msg="RemoveContainer for \"b22cf226defd761aca982ec181193f0040ce0d7932c7822cc3c2d181761513eb\" returns successfully"
Sep 4 04:28:11.129645 kubelet[2781]: I0904 04:28:11.129521 2781 scope.go:117] "RemoveContainer" containerID="aa1ffffa1687de8310ae6efe40181e903b645ec2fb95bf4311f9825dea8283f7"
Sep 4 04:28:11.132329 containerd[1569]: time="2025-09-04T04:28:11.132241025Z" level=info msg="RemoveContainer for \"aa1ffffa1687de8310ae6efe40181e903b645ec2fb95bf4311f9825dea8283f7\""
Sep 4 04:28:11.136636 containerd[1569]: time="2025-09-04T04:28:11.136600600Z" level=info msg="RemoveContainer for \"aa1ffffa1687de8310ae6efe40181e903b645ec2fb95bf4311f9825dea8283f7\" returns successfully"
Sep 4 04:28:11.136792 kubelet[2781]: I0904 04:28:11.136747 2781 scope.go:117] "RemoveContainer" containerID="b6f4c602397a9e54e95f082bbede782cb9a6c9a34b9f6435e01d207b0c1f713f"
Sep 4 04:28:11.138993 containerd[1569]: time="2025-09-04T04:28:11.138339838Z" level=info msg="RemoveContainer for \"b6f4c602397a9e54e95f082bbede782cb9a6c9a34b9f6435e01d207b0c1f713f\""
Sep 4 04:28:11.142348 containerd[1569]: time="2025-09-04T04:28:11.142307410Z" level=info msg="RemoveContainer for \"b6f4c602397a9e54e95f082bbede782cb9a6c9a34b9f6435e01d207b0c1f713f\" returns successfully"
Sep 4 04:28:11.142514 kubelet[2781]: I0904 04:28:11.142460 2781 scope.go:117] "RemoveContainer" containerID="559d97720eae9273c0fde1a3d25bd5f59a91b0f41ecac7be54036d77dd5e2daf"
Sep 4 04:28:11.144177 containerd[1569]: time="2025-09-04T04:28:11.144141237Z" level=info msg="RemoveContainer for \"559d97720eae9273c0fde1a3d25bd5f59a91b0f41ecac7be54036d77dd5e2daf\""
Sep 4 04:28:11.147799 containerd[1569]: time="2025-09-04T04:28:11.147767873Z" level=info msg="RemoveContainer for \"559d97720eae9273c0fde1a3d25bd5f59a91b0f41ecac7be54036d77dd5e2daf\" returns successfully"
Sep 4 04:28:11.148110 kubelet[2781]: I0904 04:28:11.148075 2781 scope.go:117] "RemoveContainer" containerID="603a78728fb87d0558a428ba2614f05af242742a967eb831be09568412de5c23"
Sep 4 04:28:11.148345 containerd[1569]: time="2025-09-04T04:28:11.148301905Z" level=error msg="ContainerStatus for \"603a78728fb87d0558a428ba2614f05af242742a967eb831be09568412de5c23\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"603a78728fb87d0558a428ba2614f05af242742a967eb831be09568412de5c23\": not found"
Sep 4 04:28:11.148485 kubelet[2781]: E0904 04:28:11.148461 2781 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"603a78728fb87d0558a428ba2614f05af242742a967eb831be09568412de5c23\": not found" containerID="603a78728fb87d0558a428ba2614f05af242742a967eb831be09568412de5c23"
Sep 4 04:28:11.148540 kubelet[2781]: I0904 04:28:11.148493 2781 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"603a78728fb87d0558a428ba2614f05af242742a967eb831be09568412de5c23"} err="failed to get container status \"603a78728fb87d0558a428ba2614f05af242742a967eb831be09568412de5c23\": rpc error: code = NotFound desc = an error occurred when try to find container \"603a78728fb87d0558a428ba2614f05af242742a967eb831be09568412de5c23\": not found"
Sep 4 04:28:11.148540 kubelet[2781]: I0904 04:28:11.148515 2781 scope.go:117] "RemoveContainer" containerID="b22cf226defd761aca982ec181193f0040ce0d7932c7822cc3c2d181761513eb"
Sep 4 04:28:11.148742 containerd[1569]: time="2025-09-04T04:28:11.148709428Z" level=error msg="ContainerStatus for \"b22cf226defd761aca982ec181193f0040ce0d7932c7822cc3c2d181761513eb\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"b22cf226defd761aca982ec181193f0040ce0d7932c7822cc3c2d181761513eb\": not found"
Sep 4 04:28:11.148972 kubelet[2781]: E0904 04:28:11.148910 2781 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"b22cf226defd761aca982ec181193f0040ce0d7932c7822cc3c2d181761513eb\": not found" containerID="b22cf226defd761aca982ec181193f0040ce0d7932c7822cc3c2d181761513eb"
Sep 4 04:28:11.148972 kubelet[2781]: I0904 04:28:11.148953 2781 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"b22cf226defd761aca982ec181193f0040ce0d7932c7822cc3c2d181761513eb"} err="failed to get container status \"b22cf226defd761aca982ec181193f0040ce0d7932c7822cc3c2d181761513eb\": rpc error: code = NotFound desc = an error occurred when try to find container \"b22cf226defd761aca982ec181193f0040ce0d7932c7822cc3c2d181761513eb\": not found"
Sep 4 04:28:11.149046 kubelet[2781]: I0904 04:28:11.148981 2781 scope.go:117] "RemoveContainer" containerID="aa1ffffa1687de8310ae6efe40181e903b645ec2fb95bf4311f9825dea8283f7"
Sep 4 04:28:11.149153 containerd[1569]: time="2025-09-04T04:28:11.149126429Z" level=error msg="ContainerStatus for \"aa1ffffa1687de8310ae6efe40181e903b645ec2fb95bf4311f9825dea8283f7\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"aa1ffffa1687de8310ae6efe40181e903b645ec2fb95bf4311f9825dea8283f7\": not found"
Sep 4 04:28:11.149263 kubelet[2781]: E0904 04:28:11.149238 2781 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"aa1ffffa1687de8310ae6efe40181e903b645ec2fb95bf4311f9825dea8283f7\": not found" containerID="aa1ffffa1687de8310ae6efe40181e903b645ec2fb95bf4311f9825dea8283f7"
Sep 4 04:28:11.149302 kubelet[2781]: I0904 04:28:11.149277 2781 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"aa1ffffa1687de8310ae6efe40181e903b645ec2fb95bf4311f9825dea8283f7"} err="failed to get container status \"aa1ffffa1687de8310ae6efe40181e903b645ec2fb95bf4311f9825dea8283f7\": rpc error: code = NotFound desc = an error occurred when try to find container \"aa1ffffa1687de8310ae6efe40181e903b645ec2fb95bf4311f9825dea8283f7\": not found"
Sep 4 04:28:11.149302 kubelet[2781]: I0904 04:28:11.149293 2781 scope.go:117] "RemoveContainer" containerID="b6f4c602397a9e54e95f082bbede782cb9a6c9a34b9f6435e01d207b0c1f713f"
Sep 4 04:28:11.149493 containerd[1569]: time="2025-09-04T04:28:11.149448659Z" level=error msg="ContainerStatus for \"b6f4c602397a9e54e95f082bbede782cb9a6c9a34b9f6435e01d207b0c1f713f\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"b6f4c602397a9e54e95f082bbede782cb9a6c9a34b9f6435e01d207b0c1f713f\": not found"
Sep 4 04:28:11.149685 kubelet[2781]: E0904 04:28:11.149608 2781 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"b6f4c602397a9e54e95f082bbede782cb9a6c9a34b9f6435e01d207b0c1f713f\": not found" containerID="b6f4c602397a9e54e95f082bbede782cb9a6c9a34b9f6435e01d207b0c1f713f"
Sep 4 04:28:11.149685 kubelet[2781]: I0904 04:28:11.149646 2781 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"b6f4c602397a9e54e95f082bbede782cb9a6c9a34b9f6435e01d207b0c1f713f"} err="failed to get container status \"b6f4c602397a9e54e95f082bbede782cb9a6c9a34b9f6435e01d207b0c1f713f\": rpc error: code = NotFound desc = an error occurred when try to find container \"b6f4c602397a9e54e95f082bbede782cb9a6c9a34b9f6435e01d207b0c1f713f\": not found"
Sep 4 04:28:11.149685 kubelet[2781]: I0904 04:28:11.149678 2781 scope.go:117] "RemoveContainer" containerID="559d97720eae9273c0fde1a3d25bd5f59a91b0f41ecac7be54036d77dd5e2daf"
Sep 4 04:28:11.150066 containerd[1569]: time="2025-09-04T04:28:11.150002810Z" level=error msg="ContainerStatus for \"559d97720eae9273c0fde1a3d25bd5f59a91b0f41ecac7be54036d77dd5e2daf\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"559d97720eae9273c0fde1a3d25bd5f59a91b0f41ecac7be54036d77dd5e2daf\": not found"
Sep 4 04:28:11.150186 kubelet[2781]: E0904 04:28:11.150162 2781 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"559d97720eae9273c0fde1a3d25bd5f59a91b0f41ecac7be54036d77dd5e2daf\": not found" containerID="559d97720eae9273c0fde1a3d25bd5f59a91b0f41ecac7be54036d77dd5e2daf"
Sep 4 04:28:11.150225 kubelet[2781]: I0904 04:28:11.150186 2781 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"559d97720eae9273c0fde1a3d25bd5f59a91b0f41ecac7be54036d77dd5e2daf"} err="failed to get container status \"559d97720eae9273c0fde1a3d25bd5f59a91b0f41ecac7be54036d77dd5e2daf\": rpc error: code = NotFound desc = an error occurred when try to find container \"559d97720eae9273c0fde1a3d25bd5f59a91b0f41ecac7be54036d77dd5e2daf\": not found"
Sep 4 04:28:11.764217 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-4b1f89de00f5350b7d3c46a0e6970de75f79a3f318da3902040cbe181292cd79-shm.mount: Deactivated successfully.
Sep 4 04:28:11.764371 systemd[1]: var-lib-kubelet-pods-332e4948\x2dc0e8\x2d4698\x2d8cb1\x2d4a68650a04f3-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dfxmqm.mount: Deactivated successfully.
Sep 4 04:28:11.764470 systemd[1]: var-lib-kubelet-pods-272d1165\x2d2638\x2d4a13\x2d9d03\x2d5f65b1025287-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dzmt6l.mount: Deactivated successfully.
Sep 4 04:28:11.764590 systemd[1]: var-lib-kubelet-pods-272d1165\x2d2638\x2d4a13\x2d9d03\x2d5f65b1025287-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully.
Sep 4 04:28:11.764692 systemd[1]: var-lib-kubelet-pods-272d1165\x2d2638\x2d4a13\x2d9d03\x2d5f65b1025287-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully.
Sep 4 04:28:11.807682 kubelet[2781]: E0904 04:28:11.807606 2781 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Sep 4 04:28:12.713354 sshd[4391]: Connection closed by 10.0.0.1 port 43150
Sep 4 04:28:12.714148 sshd-session[4388]: pam_unix(sshd:session): session closed for user core
Sep 4 04:28:12.729014 systemd[1]: sshd@25-10.0.0.124:22-10.0.0.1:43150.service: Deactivated successfully.
Sep 4 04:28:12.731489 systemd[1]: session-26.scope: Deactivated successfully.
Sep 4 04:28:12.732520 systemd-logind[1541]: Session 26 logged out. Waiting for processes to exit.
Sep 4 04:28:12.736280 systemd[1]: Started sshd@26-10.0.0.124:22-10.0.0.1:41930.service - OpenSSH per-connection server daemon (10.0.0.1:41930).
Sep 4 04:28:12.737220 systemd-logind[1541]: Removed session 26.
Sep 4 04:28:12.754282 kubelet[2781]: I0904 04:28:12.754223 2781 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="272d1165-2638-4a13-9d03-5f65b1025287" path="/var/lib/kubelet/pods/272d1165-2638-4a13-9d03-5f65b1025287/volumes"
Sep 4 04:28:12.755340 kubelet[2781]: I0904 04:28:12.755295 2781 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="332e4948-c0e8-4698-8cb1-4a68650a04f3" path="/var/lib/kubelet/pods/332e4948-c0e8-4698-8cb1-4a68650a04f3/volumes"
Sep 4 04:28:12.803692 sshd[4545]: Accepted publickey for core from 10.0.0.1 port 41930 ssh2: RSA SHA256:9+vpZc6EfwWxHenC1ZKsuuGVz7bQEj3BE+z2aG6aI0U
Sep 4 04:28:12.805542 sshd-session[4545]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 4 04:28:12.811168 systemd-logind[1541]: New session 27 of user core.
Sep 4 04:28:12.825010 systemd[1]: Started session-27.scope - Session 27 of User core.
Sep 4 04:28:13.451999 sshd[4548]: Connection closed by 10.0.0.1 port 41930
Sep 4 04:28:13.452592 sshd-session[4545]: pam_unix(sshd:session): session closed for user core
Sep 4 04:28:13.466204 systemd[1]: sshd@26-10.0.0.124:22-10.0.0.1:41930.service: Deactivated successfully.
Sep 4 04:28:13.469121 systemd[1]: session-27.scope: Deactivated successfully.
Sep 4 04:28:13.471333 systemd-logind[1541]: Session 27 logged out. Waiting for processes to exit.
Sep 4 04:28:13.482150 systemd[1]: Started sshd@27-10.0.0.124:22-10.0.0.1:41938.service - OpenSSH per-connection server daemon (10.0.0.1:41938).
Sep 4 04:28:13.483347 systemd-logind[1541]: Removed session 27.
Sep 4 04:28:13.517543 systemd[1]: Created slice kubepods-burstable-pod9bfcc233_298e_4087_9f5f_dfbf87fc5b1d.slice - libcontainer container kubepods-burstable-pod9bfcc233_298e_4087_9f5f_dfbf87fc5b1d.slice.
Sep 4 04:28:13.544409 kubelet[2781]: I0904 04:28:13.544335 2781 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/9bfcc233-298e-4087-9f5f-dfbf87fc5b1d-cilium-cgroup\") pod \"cilium-llg25\" (UID: \"9bfcc233-298e-4087-9f5f-dfbf87fc5b1d\") " pod="kube-system/cilium-llg25"
Sep 4 04:28:13.544409 kubelet[2781]: I0904 04:28:13.544381 2781 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/9bfcc233-298e-4087-9f5f-dfbf87fc5b1d-cni-path\") pod \"cilium-llg25\" (UID: \"9bfcc233-298e-4087-9f5f-dfbf87fc5b1d\") " pod="kube-system/cilium-llg25"
Sep 4 04:28:13.544409 kubelet[2781]: I0904 04:28:13.544399 2781 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/9bfcc233-298e-4087-9f5f-dfbf87fc5b1d-lib-modules\") pod \"cilium-llg25\" (UID: \"9bfcc233-298e-4087-9f5f-dfbf87fc5b1d\") " pod="kube-system/cilium-llg25"
Sep 4 04:28:13.544942 kubelet[2781]: I0904 04:28:13.544451 2781 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/9bfcc233-298e-4087-9f5f-dfbf87fc5b1d-clustermesh-secrets\") pod \"cilium-llg25\" (UID: \"9bfcc233-298e-4087-9f5f-dfbf87fc5b1d\") " pod="kube-system/cilium-llg25"
Sep 4 04:28:13.544942 kubelet[2781]: I0904 04:28:13.544468 2781 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/9bfcc233-298e-4087-9f5f-dfbf87fc5b1d-xtables-lock\") pod \"cilium-llg25\" (UID: \"9bfcc233-298e-4087-9f5f-dfbf87fc5b1d\") " pod="kube-system/cilium-llg25"
Sep 4 04:28:13.544942 kubelet[2781]: I0904 04:28:13.544585 2781 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/9bfcc233-298e-4087-9f5f-dfbf87fc5b1d-hostproc\") pod \"cilium-llg25\" (UID: \"9bfcc233-298e-4087-9f5f-dfbf87fc5b1d\") " pod="kube-system/cilium-llg25"
Sep 4 04:28:13.544942 kubelet[2781]: I0904 04:28:13.544605 2781 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/9bfcc233-298e-4087-9f5f-dfbf87fc5b1d-host-proc-sys-net\") pod \"cilium-llg25\" (UID: \"9bfcc233-298e-4087-9f5f-dfbf87fc5b1d\") " pod="kube-system/cilium-llg25"
Sep 4 04:28:13.544942 kubelet[2781]: I0904 04:28:13.544674 2781 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/9bfcc233-298e-4087-9f5f-dfbf87fc5b1d-hubble-tls\") pod \"cilium-llg25\" (UID: \"9bfcc233-298e-4087-9f5f-dfbf87fc5b1d\") " pod="kube-system/cilium-llg25"
Sep 4 04:28:13.544942 kubelet[2781]: I0904 04:28:13.544730 2781 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/9bfcc233-298e-4087-9f5f-dfbf87fc5b1d-etc-cni-netd\") pod \"cilium-llg25\" (UID: \"9bfcc233-298e-4087-9f5f-dfbf87fc5b1d\") " pod="kube-system/cilium-llg25"
Sep 4 04:28:13.545080 kubelet[2781]: I0904 04:28:13.544755 2781 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lprtn\" (UniqueName: \"kubernetes.io/projected/9bfcc233-298e-4087-9f5f-dfbf87fc5b1d-kube-api-access-lprtn\") pod \"cilium-llg25\" (UID: \"9bfcc233-298e-4087-9f5f-dfbf87fc5b1d\") " pod="kube-system/cilium-llg25"
Sep 4 04:28:13.545080 kubelet[2781]: I0904 04:28:13.544771 2781 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/9bfcc233-298e-4087-9f5f-dfbf87fc5b1d-cilium-ipsec-secrets\") pod \"cilium-llg25\" (UID: \"9bfcc233-298e-4087-9f5f-dfbf87fc5b1d\") " pod="kube-system/cilium-llg25"
Sep 4 04:28:13.545080 kubelet[2781]: I0904 04:28:13.544791 2781 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/9bfcc233-298e-4087-9f5f-dfbf87fc5b1d-host-proc-sys-kernel\") pod \"cilium-llg25\" (UID: \"9bfcc233-298e-4087-9f5f-dfbf87fc5b1d\") " pod="kube-system/cilium-llg25"
Sep 4 04:28:13.545080 kubelet[2781]: I0904 04:28:13.544807 2781 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/9bfcc233-298e-4087-9f5f-dfbf87fc5b1d-bpf-maps\") pod \"cilium-llg25\" (UID: \"9bfcc233-298e-4087-9f5f-dfbf87fc5b1d\") " pod="kube-system/cilium-llg25"
Sep 4 04:28:13.545080 kubelet[2781]: I0904 04:28:13.544846 2781 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/9bfcc233-298e-4087-9f5f-dfbf87fc5b1d-cilium-run\") pod \"cilium-llg25\" (UID: \"9bfcc233-298e-4087-9f5f-dfbf87fc5b1d\") " pod="kube-system/cilium-llg25"
Sep 4 04:28:13.545194 kubelet[2781]: I0904 04:28:13.545000 2781 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/9bfcc233-298e-4087-9f5f-dfbf87fc5b1d-cilium-config-path\") pod \"cilium-llg25\" (UID: \"9bfcc233-298e-4087-9f5f-dfbf87fc5b1d\") " pod="kube-system/cilium-llg25"
Sep 4 04:28:13.548242 sshd[4560]: Accepted publickey for core from 10.0.0.1 port 41938 ssh2: RSA SHA256:9+vpZc6EfwWxHenC1ZKsuuGVz7bQEj3BE+z2aG6aI0U
Sep 4 04:28:13.550011 sshd-session[4560]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 4 04:28:13.556460 systemd-logind[1541]: New session 28 of user core.
Sep 4 04:28:13.563037 systemd[1]: Started session-28.scope - Session 28 of User core.
Sep 4 04:28:13.616996 sshd[4566]: Connection closed by 10.0.0.1 port 41938
Sep 4 04:28:13.617528 sshd-session[4560]: pam_unix(sshd:session): session closed for user core
Sep 4 04:28:13.634081 systemd[1]: sshd@27-10.0.0.124:22-10.0.0.1:41938.service: Deactivated successfully.
Sep 4 04:28:13.636513 systemd[1]: session-28.scope: Deactivated successfully.
Sep 4 04:28:13.637486 systemd-logind[1541]: Session 28 logged out. Waiting for processes to exit.
Sep 4 04:28:13.640685 systemd[1]: Started sshd@28-10.0.0.124:22-10.0.0.1:41952.service - OpenSSH per-connection server daemon (10.0.0.1:41952).
Sep 4 04:28:13.641593 systemd-logind[1541]: Removed session 28.
Sep 4 04:28:13.694085 sshd[4573]: Accepted publickey for core from 10.0.0.1 port 41952 ssh2: RSA SHA256:9+vpZc6EfwWxHenC1ZKsuuGVz7bQEj3BE+z2aG6aI0U
Sep 4 04:28:13.695923 sshd-session[4573]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 4 04:28:13.701418 systemd-logind[1541]: New session 29 of user core.
Sep 4 04:28:13.715007 systemd[1]: Started session-29.scope - Session 29 of User core.
Sep 4 04:28:13.752050 kubelet[2781]: E0904 04:28:13.751970 2781 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 4 04:28:13.821926 kubelet[2781]: E0904 04:28:13.821834 2781 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 4 04:28:13.823378 containerd[1569]: time="2025-09-04T04:28:13.823332455Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-llg25,Uid:9bfcc233-298e-4087-9f5f-dfbf87fc5b1d,Namespace:kube-system,Attempt:0,}"
Sep 4 04:28:13.843894 containerd[1569]: time="2025-09-04T04:28:13.843817113Z" level=info msg="connecting to shim 928b186ebbcf44f25c34e76b7f191ea7824ba564fdf1f0f5970140c751914b17" address="unix:///run/containerd/s/34a38d73d4cd91aa63d54671bf47f5e4b09d59ab7971a8963a06e3fe59b9c4af" namespace=k8s.io protocol=ttrpc version=3
Sep 4 04:28:13.871075 systemd[1]: Started cri-containerd-928b186ebbcf44f25c34e76b7f191ea7824ba564fdf1f0f5970140c751914b17.scope - libcontainer container 928b186ebbcf44f25c34e76b7f191ea7824ba564fdf1f0f5970140c751914b17.
Sep 4 04:28:13.903059 containerd[1569]: time="2025-09-04T04:28:13.903008037Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-llg25,Uid:9bfcc233-298e-4087-9f5f-dfbf87fc5b1d,Namespace:kube-system,Attempt:0,} returns sandbox id \"928b186ebbcf44f25c34e76b7f191ea7824ba564fdf1f0f5970140c751914b17\""
Sep 4 04:28:13.904345 kubelet[2781]: E0904 04:28:13.904307 2781 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 4 04:28:13.921923 containerd[1569]: time="2025-09-04T04:28:13.921801360Z" level=info msg="CreateContainer within sandbox \"928b186ebbcf44f25c34e76b7f191ea7824ba564fdf1f0f5970140c751914b17\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Sep 4 04:28:13.934680 containerd[1569]: time="2025-09-04T04:28:13.934614510Z" level=info msg="Container a57c41852b83ef9217a5d87a2e895b3683316bea7613e0760176b85fa57b188a: CDI devices from CRI Config.CDIDevices: []"
Sep 4 04:28:13.942064 containerd[1569]: time="2025-09-04T04:28:13.942008422Z" level=info msg="CreateContainer within sandbox \"928b186ebbcf44f25c34e76b7f191ea7824ba564fdf1f0f5970140c751914b17\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"a57c41852b83ef9217a5d87a2e895b3683316bea7613e0760176b85fa57b188a\""
Sep 4 04:28:13.944070 containerd[1569]: time="2025-09-04T04:28:13.944035653Z" level=info msg="StartContainer for \"a57c41852b83ef9217a5d87a2e895b3683316bea7613e0760176b85fa57b188a\""
Sep 4 04:28:13.946889 containerd[1569]: time="2025-09-04T04:28:13.945949029Z" level=info msg="connecting to shim a57c41852b83ef9217a5d87a2e895b3683316bea7613e0760176b85fa57b188a" address="unix:///run/containerd/s/34a38d73d4cd91aa63d54671bf47f5e4b09d59ab7971a8963a06e3fe59b9c4af" protocol=ttrpc version=3
Sep 4 04:28:13.981214 systemd[1]: Started cri-containerd-a57c41852b83ef9217a5d87a2e895b3683316bea7613e0760176b85fa57b188a.scope - libcontainer container a57c41852b83ef9217a5d87a2e895b3683316bea7613e0760176b85fa57b188a.
Sep 4 04:28:14.018467 containerd[1569]: time="2025-09-04T04:28:14.018411455Z" level=info msg="StartContainer for \"a57c41852b83ef9217a5d87a2e895b3683316bea7613e0760176b85fa57b188a\" returns successfully"
Sep 4 04:28:14.030521 systemd[1]: cri-containerd-a57c41852b83ef9217a5d87a2e895b3683316bea7613e0760176b85fa57b188a.scope: Deactivated successfully.
Sep 4 04:28:14.032311 containerd[1569]: time="2025-09-04T04:28:14.032197583Z" level=info msg="received exit event container_id:\"a57c41852b83ef9217a5d87a2e895b3683316bea7613e0760176b85fa57b188a\" id:\"a57c41852b83ef9217a5d87a2e895b3683316bea7613e0760176b85fa57b188a\" pid:4647 exited_at:{seconds:1756960094 nanos:31802715}"
Sep 4 04:28:14.032571 containerd[1569]: time="2025-09-04T04:28:14.032529852Z" level=info msg="TaskExit event in podsandbox handler container_id:\"a57c41852b83ef9217a5d87a2e895b3683316bea7613e0760176b85fa57b188a\" id:\"a57c41852b83ef9217a5d87a2e895b3683316bea7613e0760176b85fa57b188a\" pid:4647 exited_at:{seconds:1756960094 nanos:31802715}"
Sep 4 04:28:14.105752 kubelet[2781]: E0904 04:28:14.105703 2781 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 4 04:28:14.112286 containerd[1569]: time="2025-09-04T04:28:14.112203366Z" level=info msg="CreateContainer within sandbox \"928b186ebbcf44f25c34e76b7f191ea7824ba564fdf1f0f5970140c751914b17\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Sep 4 04:28:14.120072 containerd[1569]: time="2025-09-04T04:28:14.120019694Z" level=info msg="Container 3ff9125cbb6c1f03580d0a23d004631286b47e712f61ba9be35d6bceb979f071: CDI devices from CRI Config.CDIDevices: []"
Sep 4 04:28:14.127729 containerd[1569]: time="2025-09-04T04:28:14.127651422Z" level=info msg="CreateContainer within sandbox \"928b186ebbcf44f25c34e76b7f191ea7824ba564fdf1f0f5970140c751914b17\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"3ff9125cbb6c1f03580d0a23d004631286b47e712f61ba9be35d6bceb979f071\""
Sep 4 04:28:14.128456 containerd[1569]: time="2025-09-04T04:28:14.128392216Z" level=info msg="StartContainer for \"3ff9125cbb6c1f03580d0a23d004631286b47e712f61ba9be35d6bceb979f071\""
Sep 4 04:28:14.129751 containerd[1569]: time="2025-09-04T04:28:14.129717097Z" level=info msg="connecting to shim 3ff9125cbb6c1f03580d0a23d004631286b47e712f61ba9be35d6bceb979f071" address="unix:///run/containerd/s/34a38d73d4cd91aa63d54671bf47f5e4b09d59ab7971a8963a06e3fe59b9c4af" protocol=ttrpc version=3
Sep 4 04:28:14.155993 systemd[1]: Started cri-containerd-3ff9125cbb6c1f03580d0a23d004631286b47e712f61ba9be35d6bceb979f071.scope - libcontainer container 3ff9125cbb6c1f03580d0a23d004631286b47e712f61ba9be35d6bceb979f071.
Sep 4 04:28:14.193919 containerd[1569]: time="2025-09-04T04:28:14.193869655Z" level=info msg="StartContainer for \"3ff9125cbb6c1f03580d0a23d004631286b47e712f61ba9be35d6bceb979f071\" returns successfully"
Sep 4 04:28:14.199937 systemd[1]: cri-containerd-3ff9125cbb6c1f03580d0a23d004631286b47e712f61ba9be35d6bceb979f071.scope: Deactivated successfully.
Sep 4 04:28:14.201308 containerd[1569]: time="2025-09-04T04:28:14.201219439Z" level=info msg="received exit event container_id:\"3ff9125cbb6c1f03580d0a23d004631286b47e712f61ba9be35d6bceb979f071\" id:\"3ff9125cbb6c1f03580d0a23d004631286b47e712f61ba9be35d6bceb979f071\" pid:4693 exited_at:{seconds:1756960094 nanos:200926274}"
Sep 4 04:28:14.201586 containerd[1569]: time="2025-09-04T04:28:14.201539356Z" level=info msg="TaskExit event in podsandbox handler container_id:\"3ff9125cbb6c1f03580d0a23d004631286b47e712f61ba9be35d6bceb979f071\" id:\"3ff9125cbb6c1f03580d0a23d004631286b47e712f61ba9be35d6bceb979f071\" pid:4693 exited_at:{seconds:1756960094 nanos:200926274}"
Sep 4 04:28:15.109079 kubelet[2781]: E0904 04:28:15.109012 2781 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 4 04:28:15.145074 containerd[1569]: time="2025-09-04T04:28:15.144944698Z" level=info msg="CreateContainer within sandbox \"928b186ebbcf44f25c34e76b7f191ea7824ba564fdf1f0f5970140c751914b17\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Sep 4 04:28:15.159642 containerd[1569]: time="2025-09-04T04:28:15.159582323Z" level=info msg="Container 62c04f92e16fd999c04c0ec6c4a90465f94272feabb2ffb323a9e03d49f8c737: CDI devices from CRI Config.CDIDevices: []"
Sep 4 04:28:15.172238 containerd[1569]: time="2025-09-04T04:28:15.172185304Z" level=info msg="CreateContainer within sandbox \"928b186ebbcf44f25c34e76b7f191ea7824ba564fdf1f0f5970140c751914b17\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"62c04f92e16fd999c04c0ec6c4a90465f94272feabb2ffb323a9e03d49f8c737\""
Sep 4 04:28:15.172845 containerd[1569]: time="2025-09-04T04:28:15.172801281Z" level=info msg="StartContainer for \"62c04f92e16fd999c04c0ec6c4a90465f94272feabb2ffb323a9e03d49f8c737\""
Sep 4 04:28:15.174885 containerd[1569]: time="2025-09-04T04:28:15.174192537Z" level=info msg="connecting to shim 62c04f92e16fd999c04c0ec6c4a90465f94272feabb2ffb323a9e03d49f8c737" address="unix:///run/containerd/s/34a38d73d4cd91aa63d54671bf47f5e4b09d59ab7971a8963a06e3fe59b9c4af" protocol=ttrpc version=3
Sep 4 04:28:15.211174 systemd[1]: Started cri-containerd-62c04f92e16fd999c04c0ec6c4a90465f94272feabb2ffb323a9e03d49f8c737.scope - libcontainer container 62c04f92e16fd999c04c0ec6c4a90465f94272feabb2ffb323a9e03d49f8c737.
Sep 4 04:28:15.264128 systemd[1]: cri-containerd-62c04f92e16fd999c04c0ec6c4a90465f94272feabb2ffb323a9e03d49f8c737.scope: Deactivated successfully.
Sep 4 04:28:15.266720 containerd[1569]: time="2025-09-04T04:28:15.266682191Z" level=info msg="TaskExit event in podsandbox handler container_id:\"62c04f92e16fd999c04c0ec6c4a90465f94272feabb2ffb323a9e03d49f8c737\" id:\"62c04f92e16fd999c04c0ec6c4a90465f94272feabb2ffb323a9e03d49f8c737\" pid:4738 exited_at:{seconds:1756960095 nanos:266426908}"
Sep 4 04:28:15.266822 containerd[1569]: time="2025-09-04T04:28:15.266779915Z" level=info msg="received exit event container_id:\"62c04f92e16fd999c04c0ec6c4a90465f94272feabb2ffb323a9e03d49f8c737\" id:\"62c04f92e16fd999c04c0ec6c4a90465f94272feabb2ffb323a9e03d49f8c737\" pid:4738 exited_at:{seconds:1756960095 nanos:266426908}"
Sep 4 04:28:15.266973 containerd[1569]: time="2025-09-04T04:28:15.266929389Z" level=info msg="StartContainer for \"62c04f92e16fd999c04c0ec6c4a90465f94272feabb2ffb323a9e03d49f8c737\" returns successfully"
Sep 4 04:28:15.293564 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-62c04f92e16fd999c04c0ec6c4a90465f94272feabb2ffb323a9e03d49f8c737-rootfs.mount: Deactivated successfully.
Sep 4 04:28:16.115116 kubelet[2781]: E0904 04:28:16.115074 2781 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 04:28:16.143434 containerd[1569]: time="2025-09-04T04:28:16.143356920Z" level=info msg="CreateContainer within sandbox \"928b186ebbcf44f25c34e76b7f191ea7824ba564fdf1f0f5970140c751914b17\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Sep 4 04:28:16.157893 containerd[1569]: time="2025-09-04T04:28:16.157817504Z" level=info msg="Container b2b6627a5660ff1bfd87add0a2c9a90e0dfc54820c2a0e8a69473548d9223882: CDI devices from CRI Config.CDIDevices: []" Sep 4 04:28:16.166564 containerd[1569]: time="2025-09-04T04:28:16.166496699Z" level=info msg="CreateContainer within sandbox \"928b186ebbcf44f25c34e76b7f191ea7824ba564fdf1f0f5970140c751914b17\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"b2b6627a5660ff1bfd87add0a2c9a90e0dfc54820c2a0e8a69473548d9223882\"" Sep 4 04:28:16.167096 containerd[1569]: time="2025-09-04T04:28:16.167066749Z" level=info msg="StartContainer for \"b2b6627a5660ff1bfd87add0a2c9a90e0dfc54820c2a0e8a69473548d9223882\"" Sep 4 04:28:16.168797 containerd[1569]: time="2025-09-04T04:28:16.168760416Z" level=info msg="connecting to shim b2b6627a5660ff1bfd87add0a2c9a90e0dfc54820c2a0e8a69473548d9223882" address="unix:///run/containerd/s/34a38d73d4cd91aa63d54671bf47f5e4b09d59ab7971a8963a06e3fe59b9c4af" protocol=ttrpc version=3 Sep 4 04:28:16.189089 systemd[1]: Started cri-containerd-b2b6627a5660ff1bfd87add0a2c9a90e0dfc54820c2a0e8a69473548d9223882.scope - libcontainer container b2b6627a5660ff1bfd87add0a2c9a90e0dfc54820c2a0e8a69473548d9223882. Sep 4 04:28:16.223507 systemd[1]: cri-containerd-b2b6627a5660ff1bfd87add0a2c9a90e0dfc54820c2a0e8a69473548d9223882.scope: Deactivated successfully. 
Sep 4 04:28:16.224724 containerd[1569]: time="2025-09-04T04:28:16.224186996Z" level=info msg="TaskExit event in podsandbox handler container_id:\"b2b6627a5660ff1bfd87add0a2c9a90e0dfc54820c2a0e8a69473548d9223882\" id:\"b2b6627a5660ff1bfd87add0a2c9a90e0dfc54820c2a0e8a69473548d9223882\" pid:4778 exited_at:{seconds:1756960096 nanos:223776369}" Sep 4 04:28:16.226766 containerd[1569]: time="2025-09-04T04:28:16.226703672Z" level=info msg="received exit event container_id:\"b2b6627a5660ff1bfd87add0a2c9a90e0dfc54820c2a0e8a69473548d9223882\" id:\"b2b6627a5660ff1bfd87add0a2c9a90e0dfc54820c2a0e8a69473548d9223882\" pid:4778 exited_at:{seconds:1756960096 nanos:223776369}" Sep 4 04:28:16.236687 containerd[1569]: time="2025-09-04T04:28:16.236610202Z" level=info msg="StartContainer for \"b2b6627a5660ff1bfd87add0a2c9a90e0dfc54820c2a0e8a69473548d9223882\" returns successfully" Sep 4 04:28:16.251547 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b2b6627a5660ff1bfd87add0a2c9a90e0dfc54820c2a0e8a69473548d9223882-rootfs.mount: Deactivated successfully. 
Sep 4 04:28:16.809231 kubelet[2781]: E0904 04:28:16.809166 2781 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Sep 4 04:28:17.120658 kubelet[2781]: E0904 04:28:17.120499 2781 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 04:28:17.229921 containerd[1569]: time="2025-09-04T04:28:17.229460005Z" level=info msg="CreateContainer within sandbox \"928b186ebbcf44f25c34e76b7f191ea7824ba564fdf1f0f5970140c751914b17\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Sep 4 04:28:17.319736 containerd[1569]: time="2025-09-04T04:28:17.319676310Z" level=info msg="Container e750336e78cf801f44edbab234bd41aa0f7f14ef89b62199e26df12a85a1db0d: CDI devices from CRI Config.CDIDevices: []" Sep 4 04:28:17.323949 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2828609134.mount: Deactivated successfully. 
Sep 4 04:28:17.329799 containerd[1569]: time="2025-09-04T04:28:17.329746466Z" level=info msg="CreateContainer within sandbox \"928b186ebbcf44f25c34e76b7f191ea7824ba564fdf1f0f5970140c751914b17\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"e750336e78cf801f44edbab234bd41aa0f7f14ef89b62199e26df12a85a1db0d\"" Sep 4 04:28:17.330527 containerd[1569]: time="2025-09-04T04:28:17.330497197Z" level=info msg="StartContainer for \"e750336e78cf801f44edbab234bd41aa0f7f14ef89b62199e26df12a85a1db0d\"" Sep 4 04:28:17.331764 containerd[1569]: time="2025-09-04T04:28:17.331712729Z" level=info msg="connecting to shim e750336e78cf801f44edbab234bd41aa0f7f14ef89b62199e26df12a85a1db0d" address="unix:///run/containerd/s/34a38d73d4cd91aa63d54671bf47f5e4b09d59ab7971a8963a06e3fe59b9c4af" protocol=ttrpc version=3 Sep 4 04:28:17.364056 systemd[1]: Started cri-containerd-e750336e78cf801f44edbab234bd41aa0f7f14ef89b62199e26df12a85a1db0d.scope - libcontainer container e750336e78cf801f44edbab234bd41aa0f7f14ef89b62199e26df12a85a1db0d. 
Sep 4 04:28:17.406569 containerd[1569]: time="2025-09-04T04:28:17.406406265Z" level=info msg="StartContainer for \"e750336e78cf801f44edbab234bd41aa0f7f14ef89b62199e26df12a85a1db0d\" returns successfully" Sep 4 04:28:17.497401 containerd[1569]: time="2025-09-04T04:28:17.497352923Z" level=info msg="TaskExit event in podsandbox handler container_id:\"e750336e78cf801f44edbab234bd41aa0f7f14ef89b62199e26df12a85a1db0d\" id:\"af5e9ce79e94ef5aad237c5c33d205e50de409d348dab80e4fe77f73c926a2f5\" pid:4846 exited_at:{seconds:1756960097 nanos:496458540}" Sep 4 04:28:17.900892 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni-avx)) Sep 4 04:28:18.135283 kubelet[2781]: E0904 04:28:18.135227 2781 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 04:28:18.177948 kubelet[2781]: I0904 04:28:18.177754 2781 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-llg25" podStartSLOduration=5.177733389 podStartE2EDuration="5.177733389s" podCreationTimestamp="2025-09-04 04:28:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-04 04:28:18.176761218 +0000 UTC m=+91.526081508" watchObservedRunningTime="2025-09-04 04:28:18.177733389 +0000 UTC m=+91.527053679" Sep 4 04:28:19.164686 kubelet[2781]: I0904 04:28:19.164629 2781 setters.go:618] "Node became not ready" node="localhost" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-04T04:28:19Z","lastTransitionTime":"2025-09-04T04:28:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"} Sep 4 04:28:19.751799 kubelet[2781]: E0904 04:28:19.751733 2781 dns.go:153] "Nameserver limits exceeded" err="Nameserver 
limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 04:28:19.752025 kubelet[2781]: E0904 04:28:19.751881 2781 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 04:28:19.822983 kubelet[2781]: E0904 04:28:19.822907 2781 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 04:28:20.727892 containerd[1569]: time="2025-09-04T04:28:20.727804559Z" level=info msg="TaskExit event in podsandbox handler container_id:\"e750336e78cf801f44edbab234bd41aa0f7f14ef89b62199e26df12a85a1db0d\" id:\"77d2fabcac4cb03d8342e8133c238a3cff8d142b466705a0e4907c6d7db839d2\" pid:5237 exit_status:1 exited_at:{seconds:1756960100 nanos:727397358}" Sep 4 04:28:21.277631 systemd-networkd[1493]: lxc_health: Link UP Sep 4 04:28:21.278280 systemd-networkd[1493]: lxc_health: Gained carrier Sep 4 04:28:21.824893 kubelet[2781]: E0904 04:28:21.824834 2781 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 04:28:22.144980 kubelet[2781]: E0904 04:28:22.144814 2781 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 04:28:22.862104 containerd[1569]: time="2025-09-04T04:28:22.862039498Z" level=info msg="TaskExit event in podsandbox handler container_id:\"e750336e78cf801f44edbab234bd41aa0f7f14ef89b62199e26df12a85a1db0d\" id:\"eea9822a3d7990f67e639e44973050e6bd539c7162f1d0499354a209f76b5969\" pid:5388 exited_at:{seconds:1756960102 nanos:861526128}" Sep 4 04:28:23.146694 kubelet[2781]: E0904 04:28:23.146527 2781 dns.go:153] "Nameserver limits 
exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 04:28:23.282216 systemd-networkd[1493]: lxc_health: Gained IPv6LL Sep 4 04:28:23.751896 kubelet[2781]: E0904 04:28:23.751805 2781 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 04:28:24.979484 containerd[1569]: time="2025-09-04T04:28:24.979361440Z" level=info msg="TaskExit event in podsandbox handler container_id:\"e750336e78cf801f44edbab234bd41aa0f7f14ef89b62199e26df12a85a1db0d\" id:\"af1de5ae5ef10b115ece1ede957292293c6437f013ffbc1d20e737fe3953de1f\" pid:5415 exited_at:{seconds:1756960104 nanos:978385695}" Sep 4 04:28:27.075673 containerd[1569]: time="2025-09-04T04:28:27.075607531Z" level=info msg="TaskExit event in podsandbox handler container_id:\"e750336e78cf801f44edbab234bd41aa0f7f14ef89b62199e26df12a85a1db0d\" id:\"51a33a062a6ba253557eb7cc95b8e589eec0953252730f559de3307bae455c9a\" pid:5446 exited_at:{seconds:1756960107 nanos:75081307}" Sep 4 04:28:27.084713 sshd[4580]: Connection closed by 10.0.0.1 port 41952 Sep 4 04:28:27.087103 sshd-session[4573]: pam_unix(sshd:session): session closed for user core Sep 4 04:28:27.093100 systemd-logind[1541]: Session 29 logged out. Waiting for processes to exit. Sep 4 04:28:27.093793 systemd[1]: sshd@28-10.0.0.124:22-10.0.0.1:41952.service: Deactivated successfully. Sep 4 04:28:27.096735 systemd[1]: session-29.scope: Deactivated successfully. Sep 4 04:28:27.098895 systemd-logind[1541]: Removed session 29.