Sep 9 00:17:07.840778 kernel: Linux version 6.12.45-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241221 p7) 14.2.1 20241221, GNU ld (Gentoo 2.44 p1) 2.44.0) #1 SMP PREEMPT_DYNAMIC Mon Sep 8 22:16:40 -00 2025
Sep 9 00:17:07.840805 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=c495f73c03808403ea4f55eb54c843aae6678d256d64068b1371f8afce28979a
Sep 9 00:17:07.840814 kernel: BIOS-provided physical RAM map:
Sep 9 00:17:07.840821 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Sep 9 00:17:07.840828 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Sep 9 00:17:07.840834 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Sep 9 00:17:07.840851 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000009cfdbfff] usable
Sep 9 00:17:07.840863 kernel: BIOS-e820: [mem 0x000000009cfdc000-0x000000009cffffff] reserved
Sep 9 00:17:07.840875 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved
Sep 9 00:17:07.840883 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved
Sep 9 00:17:07.840892 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Sep 9 00:17:07.840900 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Sep 9 00:17:07.840909 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
Sep 9 00:17:07.840917 kernel: NX (Execute Disable) protection: active
Sep 9 00:17:07.840930 kernel: APIC: Static calls initialized
Sep 9 00:17:07.840937 kernel: SMBIOS 2.8 present.
Sep 9 00:17:07.840947 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.16.2-debian-1.16.2-1 04/01/2014
Sep 9 00:17:07.840955 kernel: DMI: Memory slots populated: 1/1
Sep 9 00:17:07.840962 kernel: Hypervisor detected: KVM
Sep 9 00:17:07.840969 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Sep 9 00:17:07.840976 kernel: kvm-clock: using sched offset of 4430695318 cycles
Sep 9 00:17:07.840984 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Sep 9 00:17:07.840992 kernel: tsc: Detected 2794.748 MHz processor
Sep 9 00:17:07.841001 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Sep 9 00:17:07.841009 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Sep 9 00:17:07.841017 kernel: last_pfn = 0x9cfdc max_arch_pfn = 0x400000000
Sep 9 00:17:07.841024 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Sep 9 00:17:07.841032 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Sep 9 00:17:07.841039 kernel: Using GB pages for direct mapping
Sep 9 00:17:07.841046 kernel: ACPI: Early table checksum verification disabled
Sep 9 00:17:07.841054 kernel: ACPI: RSDP 0x00000000000F59D0 000014 (v00 BOCHS )
Sep 9 00:17:07.841061 kernel: ACPI: RSDT 0x000000009CFE241A 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 9 00:17:07.841071 kernel: ACPI: FACP 0x000000009CFE21FA 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Sep 9 00:17:07.841078 kernel: ACPI: DSDT 0x000000009CFE0040 0021BA (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 9 00:17:07.841098 kernel: ACPI: FACS 0x000000009CFE0000 000040
Sep 9 00:17:07.841106 kernel: ACPI: APIC 0x000000009CFE22EE 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 9 00:17:07.841114 kernel: ACPI: HPET 0x000000009CFE237E 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 9 00:17:07.841133 kernel: ACPI: MCFG 0x000000009CFE23B6 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 9 00:17:07.841152 kernel: ACPI: WAET 0x000000009CFE23F2 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 9 00:17:07.841159 kernel: ACPI: Reserving FACP table memory at [mem 0x9cfe21fa-0x9cfe22ed]
Sep 9 00:17:07.841174 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cfe0040-0x9cfe21f9]
Sep 9 00:17:07.841181 kernel: ACPI: Reserving FACS table memory at [mem 0x9cfe0000-0x9cfe003f]
Sep 9 00:17:07.841189 kernel: ACPI: Reserving APIC table memory at [mem 0x9cfe22ee-0x9cfe237d]
Sep 9 00:17:07.841196 kernel: ACPI: Reserving HPET table memory at [mem 0x9cfe237e-0x9cfe23b5]
Sep 9 00:17:07.841204 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cfe23b6-0x9cfe23f1]
Sep 9 00:17:07.841212 kernel: ACPI: Reserving WAET table memory at [mem 0x9cfe23f2-0x9cfe2419]
Sep 9 00:17:07.841221 kernel: No NUMA configuration found
Sep 9 00:17:07.841229 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cfdbfff]
Sep 9 00:17:07.841237 kernel: NODE_DATA(0) allocated [mem 0x9cfd4dc0-0x9cfdbfff]
Sep 9 00:17:07.841244 kernel: Zone ranges:
Sep 9 00:17:07.841252 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Sep 9 00:17:07.841260 kernel: DMA32 [mem 0x0000000001000000-0x000000009cfdbfff]
Sep 9 00:17:07.841267 kernel: Normal empty
Sep 9 00:17:07.841275 kernel: Device empty
Sep 9 00:17:07.841282 kernel: Movable zone start for each node
Sep 9 00:17:07.841292 kernel: Early memory node ranges
Sep 9 00:17:07.841300 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
Sep 9 00:17:07.841307 kernel: node 0: [mem 0x0000000000100000-0x000000009cfdbfff]
Sep 9 00:17:07.841315 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cfdbfff]
Sep 9 00:17:07.841322 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Sep 9 00:17:07.841330 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Sep 9 00:17:07.841338 kernel: On node 0, zone DMA32: 12324 pages in unavailable ranges
Sep 9 00:17:07.841345 kernel: ACPI: PM-Timer IO Port: 0x608
Sep 9 00:17:07.841356 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Sep 9 00:17:07.841364 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Sep 9 00:17:07.841374 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Sep 9 00:17:07.841383 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Sep 9 00:17:07.841393 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Sep 9 00:17:07.841401 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Sep 9 00:17:07.841409 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Sep 9 00:17:07.841416 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Sep 9 00:17:07.841424 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Sep 9 00:17:07.841431 kernel: TSC deadline timer available
Sep 9 00:17:07.841439 kernel: CPU topo: Max. logical packages: 1
Sep 9 00:17:07.841449 kernel: CPU topo: Max. logical dies: 1
Sep 9 00:17:07.841456 kernel: CPU topo: Max. dies per package: 1
Sep 9 00:17:07.841463 kernel: CPU topo: Max. threads per core: 1
Sep 9 00:17:07.841471 kernel: CPU topo: Num. cores per package: 4
Sep 9 00:17:07.841478 kernel: CPU topo: Num. threads per package: 4
Sep 9 00:17:07.841494 kernel: CPU topo: Allowing 4 present CPUs plus 0 hotplug CPUs
Sep 9 00:17:07.841502 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Sep 9 00:17:07.841510 kernel: kvm-guest: KVM setup pv remote TLB flush
Sep 9 00:17:07.841517 kernel: kvm-guest: setup PV sched yield
Sep 9 00:17:07.841528 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices
Sep 9 00:17:07.841536 kernel: Booting paravirtualized kernel on KVM
Sep 9 00:17:07.841544 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Sep 9 00:17:07.841551 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1
Sep 9 00:17:07.841559 kernel: percpu: Embedded 60 pages/cpu s207832 r8192 d29736 u524288
Sep 9 00:17:07.841567 kernel: pcpu-alloc: s207832 r8192 d29736 u524288 alloc=1*2097152
Sep 9 00:17:07.841575 kernel: pcpu-alloc: [0] 0 1 2 3
Sep 9 00:17:07.841582 kernel: kvm-guest: PV spinlocks enabled
Sep 9 00:17:07.841590 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Sep 9 00:17:07.841601 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=c495f73c03808403ea4f55eb54c843aae6678d256d64068b1371f8afce28979a
Sep 9 00:17:07.841609 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Sep 9 00:17:07.841616 kernel: random: crng init done
Sep 9 00:17:07.841624 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Sep 9 00:17:07.841632 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Sep 9 00:17:07.841639 kernel: Fallback order for Node 0: 0
Sep 9 00:17:07.841647 kernel: Built 1 zonelists, mobility grouping on. Total pages: 642938
Sep 9 00:17:07.841654 kernel: Policy zone: DMA32
Sep 9 00:17:07.841664 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Sep 9 00:17:07.841672 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Sep 9 00:17:07.841679 kernel: ftrace: allocating 40099 entries in 157 pages
Sep 9 00:17:07.841687 kernel: ftrace: allocated 157 pages with 5 groups
Sep 9 00:17:07.841694 kernel: Dynamic Preempt: voluntary
Sep 9 00:17:07.841702 kernel: rcu: Preemptible hierarchical RCU implementation.
Sep 9 00:17:07.841710 kernel: rcu: RCU event tracing is enabled.
Sep 9 00:17:07.841718 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Sep 9 00:17:07.841726 kernel: Trampoline variant of Tasks RCU enabled.
Sep 9 00:17:07.841738 kernel: Rude variant of Tasks RCU enabled.
Sep 9 00:17:07.841746 kernel: Tracing variant of Tasks RCU enabled.
Sep 9 00:17:07.841754 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Sep 9 00:17:07.841761 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Sep 9 00:17:07.841769 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Sep 9 00:17:07.841777 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Sep 9 00:17:07.841784 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Sep 9 00:17:07.841792 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16
Sep 9 00:17:07.841800 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Sep 9 00:17:07.841817 kernel: Console: colour VGA+ 80x25
Sep 9 00:17:07.841825 kernel: printk: legacy console [ttyS0] enabled
Sep 9 00:17:07.841833 kernel: ACPI: Core revision 20240827
Sep 9 00:17:07.841854 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Sep 9 00:17:07.841865 kernel: APIC: Switch to symmetric I/O mode setup
Sep 9 00:17:07.841875 kernel: x2apic enabled
Sep 9 00:17:07.841888 kernel: APIC: Switched APIC routing to: physical x2apic
Sep 9 00:17:07.841898 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask()
Sep 9 00:17:07.841906 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself()
Sep 9 00:17:07.841917 kernel: kvm-guest: setup PV IPIs
Sep 9 00:17:07.841925 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Sep 9 00:17:07.841933 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x2848df6a9de, max_idle_ns: 440795280912 ns
Sep 9 00:17:07.841941 kernel: Calibrating delay loop (skipped) preset value.. 5589.49 BogoMIPS (lpj=2794748)
Sep 9 00:17:07.841949 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Sep 9 00:17:07.841957 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Sep 9 00:17:07.841965 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Sep 9 00:17:07.841975 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Sep 9 00:17:07.841983 kernel: Spectre V2 : Mitigation: Retpolines
Sep 9 00:17:07.841991 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Sep 9 00:17:07.841999 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls
Sep 9 00:17:07.842007 kernel: active return thunk: retbleed_return_thunk
Sep 9 00:17:07.842015 kernel: RETBleed: Mitigation: untrained return thunk
Sep 9 00:17:07.842023 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Sep 9 00:17:07.842031 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Sep 9 00:17:07.842039 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied!
Sep 9 00:17:07.842050 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options.
Sep 9 00:17:07.842058 kernel: active return thunk: srso_return_thunk
Sep 9 00:17:07.842066 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode
Sep 9 00:17:07.842074 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Sep 9 00:17:07.842082 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Sep 9 00:17:07.842103 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Sep 9 00:17:07.842111 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Sep 9 00:17:07.842119 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format.
Sep 9 00:17:07.842127 kernel: Freeing SMP alternatives memory: 32K
Sep 9 00:17:07.842137 kernel: pid_max: default: 32768 minimum: 301
Sep 9 00:17:07.842145 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima
Sep 9 00:17:07.842153 kernel: landlock: Up and running.
Sep 9 00:17:07.842161 kernel: SELinux: Initializing.
Sep 9 00:17:07.842172 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Sep 9 00:17:07.842180 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Sep 9 00:17:07.842188 kernel: smpboot: CPU0: AMD EPYC 7402P 24-Core Processor (family: 0x17, model: 0x31, stepping: 0x0)
Sep 9 00:17:07.842196 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
Sep 9 00:17:07.842204 kernel: ... version: 0
Sep 9 00:17:07.842213 kernel: ... bit width: 48
Sep 9 00:17:07.842221 kernel: ... generic registers: 6
Sep 9 00:17:07.842229 kernel: ... value mask: 0000ffffffffffff
Sep 9 00:17:07.842237 kernel: ... max period: 00007fffffffffff
Sep 9 00:17:07.842245 kernel: ... fixed-purpose events: 0
Sep 9 00:17:07.842253 kernel: ... event mask: 000000000000003f
Sep 9 00:17:07.842260 kernel: signal: max sigframe size: 1776
Sep 9 00:17:07.842268 kernel: rcu: Hierarchical SRCU implementation.
Sep 9 00:17:07.842276 kernel: rcu: Max phase no-delay instances is 400.
Sep 9 00:17:07.842287 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level
Sep 9 00:17:07.842295 kernel: smp: Bringing up secondary CPUs ...
Sep 9 00:17:07.842303 kernel: smpboot: x86: Booting SMP configuration:
Sep 9 00:17:07.842311 kernel: .... node #0, CPUs: #1 #2 #3
Sep 9 00:17:07.842319 kernel: smp: Brought up 1 node, 4 CPUs
Sep 9 00:17:07.842327 kernel: smpboot: Total of 4 processors activated (22357.98 BogoMIPS)
Sep 9 00:17:07.842335 kernel: Memory: 2430968K/2571752K available (14336K kernel code, 2428K rwdata, 9956K rodata, 53832K init, 1088K bss, 134852K reserved, 0K cma-reserved)
Sep 9 00:17:07.842343 kernel: devtmpfs: initialized
Sep 9 00:17:07.842351 kernel: x86/mm: Memory block size: 128MB
Sep 9 00:17:07.842361 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Sep 9 00:17:07.842369 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Sep 9 00:17:07.842377 kernel: pinctrl core: initialized pinctrl subsystem
Sep 9 00:17:07.842386 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Sep 9 00:17:07.842394 kernel: audit: initializing netlink subsys (disabled)
Sep 9 00:17:07.842404 kernel: audit: type=2000 audit(1757377025.421:1): state=initialized audit_enabled=0 res=1
Sep 9 00:17:07.842414 kernel: thermal_sys: Registered thermal governor 'step_wise'
Sep 9 00:17:07.842423 kernel: thermal_sys: Registered thermal governor 'user_space'
Sep 9 00:17:07.842433 kernel: cpuidle: using governor menu
Sep 9 00:17:07.842446 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Sep 9 00:17:07.842456 kernel: dca service started, version 1.12.1
Sep 9 00:17:07.842466 kernel: PCI: ECAM [mem 0xb0000000-0xbfffffff] (base 0xb0000000) for domain 0000 [bus 00-ff]
Sep 9 00:17:07.842476 kernel: PCI: ECAM [mem 0xb0000000-0xbfffffff] reserved as E820 entry
Sep 9 00:17:07.842486 kernel: PCI: Using configuration type 1 for base access
Sep 9 00:17:07.842496 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Sep 9 00:17:07.842506 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Sep 9 00:17:07.842516 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Sep 9 00:17:07.842526 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Sep 9 00:17:07.842539 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Sep 9 00:17:07.842548 kernel: ACPI: Added _OSI(Module Device)
Sep 9 00:17:07.842558 kernel: ACPI: Added _OSI(Processor Device)
Sep 9 00:17:07.842568 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Sep 9 00:17:07.842578 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Sep 9 00:17:07.842588 kernel: ACPI: Interpreter enabled
Sep 9 00:17:07.842597 kernel: ACPI: PM: (supports S0 S3 S5)
Sep 9 00:17:07.842607 kernel: ACPI: Using IOAPIC for interrupt routing
Sep 9 00:17:07.842617 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Sep 9 00:17:07.842627 kernel: PCI: Using E820 reservations for host bridge windows
Sep 9 00:17:07.842635 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Sep 9 00:17:07.842643 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Sep 9 00:17:07.843150 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Sep 9 00:17:07.843284 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR]
Sep 9 00:17:07.843405 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability]
Sep 9 00:17:07.843416 kernel: PCI host bridge to bus 0000:00
Sep 9 00:17:07.843558 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Sep 9 00:17:07.843670 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Sep 9 00:17:07.843783 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Sep 9 00:17:07.843913 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xafffffff window]
Sep 9 00:17:07.844023 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Sep 9 00:17:07.844170 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x8ffffffff window]
Sep 9 00:17:07.844295 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Sep 9 00:17:07.844482 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000 conventional PCI endpoint
Sep 9 00:17:07.844639 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000 conventional PCI endpoint
Sep 9 00:17:07.844764 kernel: pci 0000:00:01.0: BAR 0 [mem 0xfd000000-0xfdffffff pref]
Sep 9 00:17:07.844907 kernel: pci 0000:00:01.0: BAR 2 [mem 0xfebd0000-0xfebd0fff]
Sep 9 00:17:07.845036 kernel: pci 0000:00:01.0: ROM [mem 0xfebc0000-0xfebcffff pref]
Sep 9 00:17:07.845195 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Sep 9 00:17:07.845343 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00 conventional PCI endpoint
Sep 9 00:17:07.845518 kernel: pci 0000:00:02.0: BAR 0 [io 0xc0c0-0xc0df]
Sep 9 00:17:07.845641 kernel: pci 0000:00:02.0: BAR 1 [mem 0xfebd1000-0xfebd1fff]
Sep 9 00:17:07.845761 kernel: pci 0000:00:02.0: BAR 4 [mem 0xfe000000-0xfe003fff 64bit pref]
Sep 9 00:17:07.845918 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000 conventional PCI endpoint
Sep 9 00:17:07.846044 kernel: pci 0000:00:03.0: BAR 0 [io 0xc000-0xc07f]
Sep 9 00:17:07.846187 kernel: pci 0000:00:03.0: BAR 1 [mem 0xfebd2000-0xfebd2fff]
Sep 9 00:17:07.846319 kernel: pci 0000:00:03.0: BAR 4 [mem 0xfe004000-0xfe007fff 64bit pref]
Sep 9 00:17:07.846460 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint
Sep 9 00:17:07.846582 kernel: pci 0000:00:04.0: BAR 0 [io 0xc0e0-0xc0ff]
Sep 9 00:17:07.846702 kernel: pci 0000:00:04.0: BAR 1 [mem 0xfebd3000-0xfebd3fff]
Sep 9 00:17:07.846820 kernel: pci 0000:00:04.0: BAR 4 [mem 0xfe008000-0xfe00bfff 64bit pref]
Sep 9 00:17:07.846965 kernel: pci 0000:00:04.0: ROM [mem 0xfeb80000-0xfebbffff pref]
Sep 9 00:17:07.847118 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100 conventional PCI endpoint
Sep 9 00:17:07.847247 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Sep 9 00:17:07.847409 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601 conventional PCI endpoint
Sep 9 00:17:07.847562 kernel: pci 0000:00:1f.2: BAR 4 [io 0xc100-0xc11f]
Sep 9 00:17:07.847693 kernel: pci 0000:00:1f.2: BAR 5 [mem 0xfebd4000-0xfebd4fff]
Sep 9 00:17:07.847835 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500 conventional PCI endpoint
Sep 9 00:17:07.848058 kernel: pci 0000:00:1f.3: BAR 4 [io 0x0700-0x073f]
Sep 9 00:17:07.848076 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Sep 9 00:17:07.848084 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Sep 9 00:17:07.848107 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Sep 9 00:17:07.848115 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Sep 9 00:17:07.848123 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Sep 9 00:17:07.848131 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Sep 9 00:17:07.848139 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Sep 9 00:17:07.848147 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Sep 9 00:17:07.848155 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Sep 9 00:17:07.848165 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Sep 9 00:17:07.848173 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Sep 9 00:17:07.848181 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Sep 9 00:17:07.848189 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Sep 9 00:17:07.848197 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Sep 9 00:17:07.848204 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Sep 9 00:17:07.848212 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Sep 9 00:17:07.848220 kernel: iommu: Default domain type: Translated
Sep 9 00:17:07.848228 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Sep 9 00:17:07.848238 kernel: PCI: Using ACPI for IRQ routing
Sep 9 00:17:07.848246 kernel: PCI: pci_cache_line_size set to 64 bytes
Sep 9 00:17:07.848254 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Sep 9 00:17:07.848262 kernel: e820: reserve RAM buffer [mem 0x9cfdc000-0x9fffffff]
Sep 9 00:17:07.848385 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Sep 9 00:17:07.848505 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Sep 9 00:17:07.848625 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Sep 9 00:17:07.848635 kernel: vgaarb: loaded
Sep 9 00:17:07.848644 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Sep 9 00:17:07.848655 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Sep 9 00:17:07.848663 kernel: clocksource: Switched to clocksource kvm-clock
Sep 9 00:17:07.848671 kernel: VFS: Disk quotas dquot_6.6.0
Sep 9 00:17:07.848679 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Sep 9 00:17:07.848687 kernel: pnp: PnP ACPI init
Sep 9 00:17:07.848832 kernel: system 00:05: [mem 0xb0000000-0xbfffffff window] has been reserved
Sep 9 00:17:07.848857 kernel: pnp: PnP ACPI: found 6 devices
Sep 9 00:17:07.848867 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Sep 9 00:17:07.848882 kernel: NET: Registered PF_INET protocol family
Sep 9 00:17:07.848892 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Sep 9 00:17:07.848903 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Sep 9 00:17:07.848913 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Sep 9 00:17:07.848924 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Sep 9 00:17:07.848932 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Sep 9 00:17:07.848940 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Sep 9 00:17:07.848948 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Sep 9 00:17:07.848956 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Sep 9 00:17:07.848967 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Sep 9 00:17:07.848975 kernel: NET: Registered PF_XDP protocol family
Sep 9 00:17:07.849106 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Sep 9 00:17:07.849235 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Sep 9 00:17:07.849373 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Sep 9 00:17:07.849503 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xafffffff window]
Sep 9 00:17:07.849614 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window]
Sep 9 00:17:07.849728 kernel: pci_bus 0000:00: resource 9 [mem 0x100000000-0x8ffffffff window]
Sep 9 00:17:07.849743 kernel: PCI: CLS 0 bytes, default 64
Sep 9 00:17:07.849752 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x2848df6a9de, max_idle_ns: 440795280912 ns
Sep 9 00:17:07.849760 kernel: Initialise system trusted keyrings
Sep 9 00:17:07.849768 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Sep 9 00:17:07.849776 kernel: Key type asymmetric registered
Sep 9 00:17:07.849784 kernel: Asymmetric key parser 'x509' registered
Sep 9 00:17:07.849792 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Sep 9 00:17:07.849800 kernel: io scheduler mq-deadline registered
Sep 9 00:17:07.849808 kernel: io scheduler kyber registered
Sep 9 00:17:07.849818 kernel: io scheduler bfq registered
Sep 9 00:17:07.849826 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Sep 9 00:17:07.849845 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
Sep 9 00:17:07.849866 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23
Sep 9 00:17:07.849877 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20
Sep 9 00:17:07.849890 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Sep 9 00:17:07.849901 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Sep 9 00:17:07.849911 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Sep 9 00:17:07.849921 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Sep 9 00:17:07.849932 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Sep 9 00:17:07.850083 kernel: rtc_cmos 00:04: RTC can wake from S4
Sep 9 00:17:07.850110 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Sep 9 00:17:07.850226 kernel: rtc_cmos 00:04: registered as rtc0
Sep 9 00:17:07.850351 kernel: rtc_cmos 00:04: setting system clock to 2025-09-09T00:17:07 UTC (1757377027)
Sep 9 00:17:07.850484 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram, hpet irqs
Sep 9 00:17:07.850497 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
Sep 9 00:17:07.850507 kernel: NET: Registered PF_INET6 protocol family
Sep 9 00:17:07.850524 kernel: Segment Routing with IPv6
Sep 9 00:17:07.850534 kernel: In-situ OAM (IOAM) with IPv6
Sep 9 00:17:07.850544 kernel: NET: Registered PF_PACKET protocol family
Sep 9 00:17:07.850555 kernel: Key type dns_resolver registered
Sep 9 00:17:07.850565 kernel: IPI shorthand broadcast: enabled
Sep 9 00:17:07.850575 kernel: sched_clock: Marking stable (3236007033, 130132727)->(3397998875, -31859115)
Sep 9 00:17:07.850586 kernel: registered taskstats version 1
Sep 9 00:17:07.850596 kernel: Loading compiled-in X.509 certificates
Sep 9 00:17:07.850607 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.45-flatcar: 08d0986253b18b7fd74c2cc5404da4ba92260e75'
Sep 9 00:17:07.850620 kernel: Demotion targets for Node 0: null
Sep 9 00:17:07.850630 kernel: Key type .fscrypt registered
Sep 9 00:17:07.850638 kernel: Key type fscrypt-provisioning registered
Sep 9 00:17:07.850646 kernel: ima: No TPM chip found, activating TPM-bypass!
Sep 9 00:17:07.850654 kernel: ima: Allocated hash algorithm: sha1
Sep 9 00:17:07.850662 kernel: ima: No architecture policies found
Sep 9 00:17:07.850671 kernel: clk: Disabling unused clocks
Sep 9 00:17:07.850679 kernel: Warning: unable to open an initial console.
Sep 9 00:17:07.850689 kernel: Freeing unused kernel image (initmem) memory: 53832K
Sep 9 00:17:07.850698 kernel: Write protecting the kernel read-only data: 24576k
Sep 9 00:17:07.850706 kernel: Freeing unused kernel image (rodata/data gap) memory: 284K
Sep 9 00:17:07.850714 kernel: Run /init as init process
Sep 9 00:17:07.850723 kernel: with arguments:
Sep 9 00:17:07.850731 kernel: /init
Sep 9 00:17:07.850739 kernel: with environment:
Sep 9 00:17:07.850747 kernel: HOME=/
Sep 9 00:17:07.850755 kernel: TERM=linux
Sep 9 00:17:07.850764 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Sep 9 00:17:07.850780 systemd[1]: Successfully made /usr/ read-only.
Sep 9 00:17:07.850803 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Sep 9 00:17:07.850815 systemd[1]: Detected virtualization kvm.
Sep 9 00:17:07.850823 systemd[1]: Detected architecture x86-64.
Sep 9 00:17:07.850832 systemd[1]: Running in initrd.
Sep 9 00:17:07.850854 systemd[1]: No hostname configured, using default hostname.
Sep 9 00:17:07.850866 systemd[1]: Hostname set to .
Sep 9 00:17:07.850874 systemd[1]: Initializing machine ID from VM UUID.
Sep 9 00:17:07.850883 systemd[1]: Queued start job for default target initrd.target.
Sep 9 00:17:07.850891 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Sep 9 00:17:07.850900 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Sep 9 00:17:07.850910 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Sep 9 00:17:07.850919 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Sep 9 00:17:07.850930 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Sep 9 00:17:07.850940 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Sep 9 00:17:07.850950 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Sep 9 00:17:07.850958 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Sep 9 00:17:07.850967 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Sep 9 00:17:07.850976 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Sep 9 00:17:07.850985 systemd[1]: Reached target paths.target - Path Units.
Sep 9 00:17:07.850996 systemd[1]: Reached target slices.target - Slice Units.
Sep 9 00:17:07.851004 systemd[1]: Reached target swap.target - Swaps.
Sep 9 00:17:07.851013 systemd[1]: Reached target timers.target - Timer Units.
Sep 9 00:17:07.851022 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Sep 9 00:17:07.851030 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Sep 9 00:17:07.851039 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Sep 9 00:17:07.851048 systemd[1]: Listening on systemd-journald.socket - Journal Sockets.
Sep 9 00:17:07.851057 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Sep 9 00:17:07.851068 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Sep 9 00:17:07.851077 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Sep 9 00:17:07.851099 systemd[1]: Reached target sockets.target - Socket Units.
Sep 9 00:17:07.851108 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Sep 9 00:17:07.851117 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Sep 9 00:17:07.851128 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Sep 9 00:17:07.851140 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply).
Sep 9 00:17:07.851149 systemd[1]: Starting systemd-fsck-usr.service...
Sep 9 00:17:07.851158 systemd[1]: Starting systemd-journald.service - Journal Service...
Sep 9 00:17:07.851166 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Sep 9 00:17:07.851175 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Sep 9 00:17:07.851184 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Sep 9 00:17:07.851195 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Sep 9 00:17:07.851204 systemd[1]: Finished systemd-fsck-usr.service.
Sep 9 00:17:07.851213 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Sep 9 00:17:07.851247 systemd-journald[220]: Collecting audit messages is disabled.
Sep 9 00:17:07.851270 systemd-journald[220]: Journal started
Sep 9 00:17:07.851292 systemd-journald[220]: Runtime Journal (/run/log/journal/eb06a26396924df7a27d32418e9d0f36) is 6M, max 48.6M, 42.5M free.
Sep 9 00:17:07.852117 systemd[1]: Started systemd-journald.service - Journal Service.
Sep 9 00:17:07.882050 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Sep 9 00:17:07.933363 systemd-modules-load[222]: Inserted module 'overlay'
Sep 9 00:17:07.935365 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Sep 9 00:17:07.941204 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Sep 9 00:17:07.944971 systemd-tmpfiles[234]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring.
Sep 9 00:17:07.948905 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Sep 9 00:17:07.952022 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Sep 9 00:17:07.955814 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Sep 9 00:17:07.964123 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Sep 9 00:17:07.966055 systemd-modules-load[222]: Inserted module 'br_netfilter'
Sep 9 00:17:07.967081 kernel: Bridge firewalling registered
Sep 9 00:17:07.967619 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Sep 9 00:17:07.970005 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Sep 9 00:17:07.972678 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Sep 9 00:17:07.975528 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Sep 9 00:17:07.976370 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Sep 9 00:17:07.999156 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Sep 9 00:17:08.003393 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Sep 9 00:17:08.006804 dracut-cmdline[257]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=c495f73c03808403ea4f55eb54c843aae6678d256d64068b1371f8afce28979a
Sep 9 00:17:08.050221 systemd-resolved[269]: Positive Trust Anchors:
Sep 9 00:17:08.050243 systemd-resolved[269]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Sep 9 00:17:08.050275 systemd-resolved[269]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Sep 9 00:17:08.053670 systemd-resolved[269]: Defaulting to hostname 'linux'.
Sep 9 00:17:08.054997 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Sep 9 00:17:08.060493 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Sep 9 00:17:08.113124 kernel: SCSI subsystem initialized
Sep 9 00:17:08.122108 kernel: Loading iSCSI transport class v2.0-870.
Sep 9 00:17:08.133117 kernel: iscsi: registered transport (tcp)
Sep 9 00:17:08.154159 kernel: iscsi: registered transport (qla4xxx)
Sep 9 00:17:08.154206 kernel: QLogic iSCSI HBA Driver
Sep 9 00:17:08.173779 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Sep 9 00:17:08.195494 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Sep 9 00:17:08.196701 systemd[1]: Reached target network-pre.target - Preparation for Network.
Sep 9 00:17:08.250364 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Sep 9 00:17:08.252861 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Sep 9 00:17:08.316132 kernel: raid6: avx2x4 gen() 30369 MB/s
Sep 9 00:17:08.333112 kernel: raid6: avx2x2 gen() 30890 MB/s
Sep 9 00:17:08.350177 kernel: raid6: avx2x1 gen() 25417 MB/s
Sep 9 00:17:08.350201 kernel: raid6: using algorithm avx2x2 gen() 30890 MB/s
Sep 9 00:17:08.368194 kernel: raid6: .... xor() 19407 MB/s, rmw enabled
Sep 9 00:17:08.368242 kernel: raid6: using avx2x2 recovery algorithm
Sep 9 00:17:08.389126 kernel: xor: automatically using best checksumming function avx
Sep 9 00:17:08.583142 kernel: Btrfs loaded, zoned=no, fsverity=no
Sep 9 00:17:08.591330 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Sep 9 00:17:08.594056 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Sep 9 00:17:08.628581 systemd-udevd[471]: Using default interface naming scheme 'v255'.
Sep 9 00:17:08.634256 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Sep 9 00:17:08.637532 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Sep 9 00:17:08.661562 dracut-pre-trigger[478]: rd.md=0: removing MD RAID activation
Sep 9 00:17:08.689712 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Sep 9 00:17:08.693297 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Sep 9 00:17:08.784141 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Sep 9 00:17:08.787363 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Sep 9 00:17:08.875133 kernel: cryptd: max_cpu_qlen set to 1000
Sep 9 00:17:08.893220 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues
Sep 9 00:17:08.921729 kernel: AES CTR mode by8 optimization enabled
Sep 9 00:17:08.921749 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB)
Sep 9 00:17:08.925656 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input2
Sep 9 00:17:08.925676 kernel: libata version 3.00 loaded.
Sep 9 00:17:08.928746 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Sep 9 00:17:08.928775 kernel: GPT:9289727 != 19775487
Sep 9 00:17:08.928789 kernel: GPT:Alternate GPT header not at the end of the disk.
Sep 9 00:17:08.928803 kernel: GPT:9289727 != 19775487
Sep 9 00:17:08.929194 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Sep 9 00:17:08.932785 kernel: GPT: Use GNU Parted to correct GPT errors.
Sep 9 00:17:08.932801 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Sep 9 00:17:08.929367 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Sep 9 00:17:08.933336 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Sep 9 00:17:08.942063 kernel: ahci 0000:00:1f.2: version 3.0
Sep 9 00:17:08.943420 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16
Sep 9 00:17:08.943439 kernel: ahci 0000:00:1f.2: AHCI vers 0001.0000, 32 command slots, 1.5 Gbps, SATA mode
Sep 9 00:17:08.943769 kernel: ahci 0000:00:1f.2: 6/6 ports implemented (port mask 0x3f)
Sep 9 00:17:08.944202 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only
Sep 9 00:17:08.936377 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Sep 9 00:17:08.946612 kernel: scsi host0: ahci
Sep 9 00:17:08.946788 kernel: scsi host1: ahci
Sep 9 00:17:08.944234 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
Sep 9 00:17:08.949132 kernel: scsi host2: ahci
Sep 9 00:17:08.949326 kernel: scsi host3: ahci
Sep 9 00:17:08.953106 kernel: scsi host4: ahci
Sep 9 00:17:08.956757 kernel: scsi host5: ahci
Sep 9 00:17:08.956990 kernel: ata1: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4100 irq 34 lpm-pol 1
Sep 9 00:17:08.957003 kernel: ata2: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4180 irq 34 lpm-pol 1
Sep 9 00:17:08.957019 kernel: ata3: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4200 irq 34 lpm-pol 1
Sep 9 00:17:08.958943 kernel: ata4: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4280 irq 34 lpm-pol 1
Sep 9 00:17:08.960043 kernel: ata5: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4300 irq 34 lpm-pol 1
Sep 9 00:17:08.960063 kernel: ata6: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4380 irq 34 lpm-pol 1
Sep 9 00:17:08.993914 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Sep 9 00:17:09.003967 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Sep 9 00:17:09.016801 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Sep 9 00:17:09.025728 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
Sep 9 00:17:09.025853 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Sep 9 00:17:09.028422 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Sep 9 00:17:09.090880 disk-uuid[636]: Primary Header is updated.
Sep 9 00:17:09.090880 disk-uuid[636]: Secondary Entries is updated.
Sep 9 00:17:09.090880 disk-uuid[636]: Secondary Header is updated.
Sep 9 00:17:09.111680 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Sep 9 00:17:09.111701 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Sep 9 00:17:09.110879 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Sep 9 00:17:09.273074 kernel: ata6: SATA link down (SStatus 0 SControl 300)
Sep 9 00:17:09.273167 kernel: ata1: SATA link down (SStatus 0 SControl 300)
Sep 9 00:17:09.273179 kernel: ata2: SATA link down (SStatus 0 SControl 300)
Sep 9 00:17:09.273190 kernel: ata4: SATA link down (SStatus 0 SControl 300)
Sep 9 00:17:09.274126 kernel: ata5: SATA link down (SStatus 0 SControl 300)
Sep 9 00:17:09.275123 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300)
Sep 9 00:17:09.276349 kernel: ata3.00: LPM support broken, forcing max_power
Sep 9 00:17:09.276366 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100
Sep 9 00:17:09.277528 kernel: ata3.00: applying bridge limits
Sep 9 00:17:09.278361 kernel: ata3.00: LPM support broken, forcing max_power
Sep 9 00:17:09.278375 kernel: ata3.00: configured for UDMA/100
Sep 9 00:17:09.281147 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5
Sep 9 00:17:09.332153 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray
Sep 9 00:17:09.332472 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Sep 9 00:17:09.358116 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0
Sep 9 00:17:09.810606 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Sep 9 00:17:09.811445 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Sep 9 00:17:09.814051 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Sep 9 00:17:09.814457 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Sep 9 00:17:09.815857 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Sep 9 00:17:09.853679 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Sep 9 00:17:10.130117 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Sep 9 00:17:10.130185 disk-uuid[637]: The operation has completed successfully.
Sep 9 00:17:10.173152 systemd[1]: disk-uuid.service: Deactivated successfully.
Sep 9 00:17:10.173304 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Sep 9 00:17:10.206849 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Sep 9 00:17:10.233058 sh[666]: Success
Sep 9 00:17:10.253154 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Sep 9 00:17:10.253244 kernel: device-mapper: uevent: version 1.0.3
Sep 9 00:17:10.253262 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev
Sep 9 00:17:10.266138 kernel: device-mapper: verity: sha256 using shash "sha256-ni"
Sep 9 00:17:10.299503 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Sep 9 00:17:10.301911 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Sep 9 00:17:10.325995 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Sep 9 00:17:10.331131 kernel: BTRFS: device fsid c483a4f4-f0a7-42f4-ac8d-111955dab3a7 devid 1 transid 41 /dev/mapper/usr (253:0) scanned by mount (678)
Sep 9 00:17:10.333660 kernel: BTRFS info (device dm-0): first mount of filesystem c483a4f4-f0a7-42f4-ac8d-111955dab3a7
Sep 9 00:17:10.333730 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Sep 9 00:17:10.339672 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Sep 9 00:17:10.339708 kernel: BTRFS info (device dm-0): enabling free space tree
Sep 9 00:17:10.341294 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Sep 9 00:17:10.343943 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System.
Sep 9 00:17:10.346458 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Sep 9 00:17:10.349598 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Sep 9 00:17:10.352638 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Sep 9 00:17:10.386120 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (709)
Sep 9 00:17:10.388549 kernel: BTRFS info (device vda6): first mount of filesystem 1ca5876a-e169-4e15-a56e-4292fa8c609f
Sep 9 00:17:10.388593 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Sep 9 00:17:10.393131 kernel: BTRFS info (device vda6): turning on async discard
Sep 9 00:17:10.393166 kernel: BTRFS info (device vda6): enabling free space tree
Sep 9 00:17:10.399120 kernel: BTRFS info (device vda6): last unmount of filesystem 1ca5876a-e169-4e15-a56e-4292fa8c609f
Sep 9 00:17:10.399819 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Sep 9 00:17:10.401656 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Sep 9 00:17:10.498233 ignition[752]: Ignition 2.21.0
Sep 9 00:17:10.498684 ignition[752]: Stage: fetch-offline
Sep 9 00:17:10.498744 ignition[752]: no configs at "/usr/lib/ignition/base.d"
Sep 9 00:17:10.498759 ignition[752]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Sep 9 00:17:10.498894 ignition[752]: parsed url from cmdline: ""
Sep 9 00:17:10.498899 ignition[752]: no config URL provided
Sep 9 00:17:10.498906 ignition[752]: reading system config file "/usr/lib/ignition/user.ign"
Sep 9 00:17:10.498920 ignition[752]: no config at "/usr/lib/ignition/user.ign"
Sep 9 00:17:10.498950 ignition[752]: op(1): [started] loading QEMU firmware config module
Sep 9 00:17:10.498956 ignition[752]: op(1): executing: "modprobe" "qemu_fw_cfg"
Sep 9 00:17:10.509968 ignition[752]: op(1): [finished] loading QEMU firmware config module
Sep 9 00:17:10.512354 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Sep 9 00:17:10.515366 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Sep 9 00:17:10.555393 ignition[752]: parsing config with SHA512: f24339cc01907aab826e837ec626ab914878ccd10d6e3c11d38c68717815d7a1708895d894cb13e37154ec49753ad7cea925c0581dee5b7950c0e2d836f33353
Sep 9 00:17:10.559353 unknown[752]: fetched base config from "system"
Sep 9 00:17:10.559367 unknown[752]: fetched user config from "qemu"
Sep 9 00:17:10.560152 ignition[752]: fetch-offline: fetch-offline passed
Sep 9 00:17:10.560215 ignition[752]: Ignition finished successfully
Sep 9 00:17:10.563472 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Sep 9 00:17:10.581125 systemd-networkd[858]: lo: Link UP
Sep 9 00:17:10.581136 systemd-networkd[858]: lo: Gained carrier
Sep 9 00:17:10.583229 systemd-networkd[858]: Enumeration completed
Sep 9 00:17:10.583342 systemd[1]: Started systemd-networkd.service - Network Configuration.
Sep 9 00:17:10.583742 systemd-networkd[858]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Sep 9 00:17:10.583748 systemd-networkd[858]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Sep 9 00:17:10.585322 systemd-networkd[858]: eth0: Link UP
Sep 9 00:17:10.630435 systemd-networkd[858]: eth0: Gained carrier
Sep 9 00:17:10.630456 systemd-networkd[858]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Sep 9 00:17:10.631205 systemd[1]: Reached target network.target - Network.
Sep 9 00:17:10.631896 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
Sep 9 00:17:10.634809 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Sep 9 00:17:10.652214 systemd-networkd[858]: eth0: DHCPv4 address 10.0.0.54/16, gateway 10.0.0.1 acquired from 10.0.0.1
Sep 9 00:17:10.673052 ignition[862]: Ignition 2.21.0
Sep 9 00:17:10.673066 ignition[862]: Stage: kargs
Sep 9 00:17:10.673258 ignition[862]: no configs at "/usr/lib/ignition/base.d"
Sep 9 00:17:10.673271 ignition[862]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Sep 9 00:17:10.676498 ignition[862]: kargs: kargs passed
Sep 9 00:17:10.676611 ignition[862]: Ignition finished successfully
Sep 9 00:17:10.681806 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Sep 9 00:17:10.684125 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Sep 9 00:17:10.711185 ignition[871]: Ignition 2.21.0
Sep 9 00:17:10.711201 ignition[871]: Stage: disks
Sep 9 00:17:10.711586 ignition[871]: no configs at "/usr/lib/ignition/base.d"
Sep 9 00:17:10.711601 ignition[871]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Sep 9 00:17:10.713186 ignition[871]: disks: disks passed
Sep 9 00:17:10.713288 ignition[871]: Ignition finished successfully
Sep 9 00:17:10.717691 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Sep 9 00:17:10.718967 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Sep 9 00:17:10.720891 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Sep 9 00:17:10.721161 systemd[1]: Reached target local-fs.target - Local File Systems.
Sep 9 00:17:10.721514 systemd[1]: Reached target sysinit.target - System Initialization.
Sep 9 00:17:10.721876 systemd[1]: Reached target basic.target - Basic System.
Sep 9 00:17:10.729780 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Sep 9 00:17:10.761831 systemd-fsck[881]: ROOT: clean, 15/553520 files, 52789/553472 blocks
Sep 9 00:17:11.104586 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Sep 9 00:17:11.108084 systemd[1]: Mounting sysroot.mount - /sysroot...
Sep 9 00:17:11.262140 kernel: EXT4-fs (vda9): mounted filesystem 4b59fff7-9272-4156-91f8-37989d927dc6 r/w with ordered data mode. Quota mode: none.
Sep 9 00:17:11.262984 systemd[1]: Mounted sysroot.mount - /sysroot.
Sep 9 00:17:11.264566 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Sep 9 00:17:11.267293 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Sep 9 00:17:11.269322 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Sep 9 00:17:11.270498 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Sep 9 00:17:11.270542 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Sep 9 00:17:11.270567 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Sep 9 00:17:11.284632 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Sep 9 00:17:11.288787 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Sep 9 00:17:11.291237 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (889)
Sep 9 00:17:11.291271 kernel: BTRFS info (device vda6): first mount of filesystem 1ca5876a-e169-4e15-a56e-4292fa8c609f
Sep 9 00:17:11.293224 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Sep 9 00:17:11.296835 kernel: BTRFS info (device vda6): turning on async discard
Sep 9 00:17:11.296876 kernel: BTRFS info (device vda6): enabling free space tree
Sep 9 00:17:11.299559 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Sep 9 00:17:11.329890 initrd-setup-root[913]: cut: /sysroot/etc/passwd: No such file or directory
Sep 9 00:17:11.334371 initrd-setup-root[920]: cut: /sysroot/etc/group: No such file or directory
Sep 9 00:17:11.340398 initrd-setup-root[927]: cut: /sysroot/etc/shadow: No such file or directory
Sep 9 00:17:11.344570 initrd-setup-root[934]: cut: /sysroot/etc/gshadow: No such file or directory
Sep 9 00:17:11.441926 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Sep 9 00:17:11.478406 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Sep 9 00:17:11.481574 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Sep 9 00:17:11.505211 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Sep 9 00:17:11.506238 kernel: BTRFS info (device vda6): last unmount of filesystem 1ca5876a-e169-4e15-a56e-4292fa8c609f
Sep 9 00:17:11.522362 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Sep 9 00:17:11.539086 ignition[1003]: INFO : Ignition 2.21.0
Sep 9 00:17:11.539086 ignition[1003]: INFO : Stage: mount
Sep 9 00:17:11.541151 ignition[1003]: INFO : no configs at "/usr/lib/ignition/base.d"
Sep 9 00:17:11.541151 ignition[1003]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Sep 9 00:17:11.541151 ignition[1003]: INFO : mount: mount passed
Sep 9 00:17:11.547566 ignition[1003]: INFO : Ignition finished successfully
Sep 9 00:17:11.545030 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Sep 9 00:17:11.548068 systemd[1]: Starting ignition-files.service - Ignition (files)...
Sep 9 00:17:11.579810 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Sep 9 00:17:11.624510 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (1015)
Sep 9 00:17:11.624563 kernel: BTRFS info (device vda6): first mount of filesystem 1ca5876a-e169-4e15-a56e-4292fa8c609f
Sep 9 00:17:11.624579 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Sep 9 00:17:11.629121 kernel: BTRFS info (device vda6): turning on async discard
Sep 9 00:17:11.629212 kernel: BTRFS info (device vda6): enabling free space tree
Sep 9 00:17:11.631067 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Sep 9 00:17:11.667455 ignition[1032]: INFO : Ignition 2.21.0
Sep 9 00:17:11.667455 ignition[1032]: INFO : Stage: files
Sep 9 00:17:11.669273 ignition[1032]: INFO : no configs at "/usr/lib/ignition/base.d"
Sep 9 00:17:11.669273 ignition[1032]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Sep 9 00:17:11.672981 ignition[1032]: DEBUG : files: compiled without relabeling support, skipping
Sep 9 00:17:11.674386 ignition[1032]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Sep 9 00:17:11.674386 ignition[1032]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Sep 9 00:17:11.679428 ignition[1032]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Sep 9 00:17:11.681020 ignition[1032]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Sep 9 00:17:11.681020 ignition[1032]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Sep 9 00:17:11.680301 unknown[1032]: wrote ssh authorized keys file for user: core
Sep 9 00:17:11.743189 ignition[1032]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Sep 9 00:17:11.743189 ignition[1032]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1
Sep 9 00:17:11.804647 ignition[1032]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Sep 9 00:17:12.184049 ignition[1032]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Sep 9 00:17:12.184049 ignition[1032]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz"
Sep 9 00:17:12.188499 ignition[1032]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1
Sep 9 00:17:12.400226 ignition[1032]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
Sep 9 00:17:12.518776 ignition[1032]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz"
Sep 9 00:17:12.518776 ignition[1032]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh"
Sep 9 00:17:12.522339 ignition[1032]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh"
Sep 9 00:17:12.523998 ignition[1032]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml"
Sep 9 00:17:12.525739 ignition[1032]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml"
Sep 9 00:17:12.527371 ignition[1032]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Sep 9 00:17:12.529054 ignition[1032]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Sep 9 00:17:12.530729 ignition[1032]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Sep 9 00:17:12.532871 ignition[1032]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Sep 9 00:17:12.622278 ignition[1032]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf"
Sep 9 00:17:12.624213 ignition[1032]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Sep 9 00:17:12.625933 ignition[1032]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw"
Sep 9 00:17:12.673350 systemd-networkd[858]: eth0: Gained IPv6LL
Sep 9 00:17:12.750453 ignition[1032]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw"
Sep 9 00:17:12.750453 ignition[1032]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw"
Sep 9 00:17:12.755852 ignition[1032]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://extensions.flatcar.org/extensions/kubernetes-v1.31.8-x86-64.raw: attempt #1
Sep 9 00:17:13.430219 ignition[1032]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK
Sep 9 00:17:14.532371 ignition[1032]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw"
Sep 9 00:17:14.532371 ignition[1032]: INFO : files: op(c): [started] processing unit "prepare-helm.service"
Sep 9 00:17:14.536223 ignition[1032]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Sep 9 00:17:15.492935 ignition[1032]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Sep 9 00:17:15.492935 ignition[1032]: INFO : files: op(c): [finished] processing unit "prepare-helm.service"
Sep 9 00:17:15.492935 ignition[1032]: INFO : files: op(e): [started] processing unit "coreos-metadata.service"
Sep 9 00:17:15.492935 ignition[1032]: INFO : files: op(e): op(f): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Sep 9 00:17:15.500066 ignition[1032]: INFO : files: op(e): op(f): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Sep 9 00:17:15.500066 ignition[1032]: INFO : files: op(e): [finished] processing unit "coreos-metadata.service"
Sep 9 00:17:15.500066 ignition[1032]: INFO : files: op(10): [started] setting preset to disabled for "coreos-metadata.service"
Sep 9 00:17:15.641863 ignition[1032]: INFO : files: op(10): op(11): [started] removing enablement symlink(s) for "coreos-metadata.service"
Sep 9 00:17:15.647322 ignition[1032]: INFO : files: op(10): op(11): [finished] removing enablement symlink(s) for "coreos-metadata.service"
Sep 9 00:17:15.649070 ignition[1032]: INFO : files: op(10): [finished] setting preset to disabled for "coreos-metadata.service"
Sep 9 00:17:15.649070 ignition[1032]: INFO : files: op(12): [started] setting preset to enabled for "prepare-helm.service"
Sep 9 00:17:15.649070 ignition[1032]: INFO : files: op(12): [finished] setting preset to enabled for "prepare-helm.service"
Sep 9 00:17:15.649070 ignition[1032]: INFO : files: createResultFile: createFiles: op(13): [started] writing file "/sysroot/etc/.ignition-result.json"
Sep 9 00:17:15.649070 ignition[1032]: INFO : files: createResultFile: createFiles: op(13): [finished] writing file "/sysroot/etc/.ignition-result.json"
Sep 9 00:17:15.649070 ignition[1032]: INFO : files: files passed
Sep 9 00:17:15.649070 ignition[1032]: INFO : Ignition finished successfully
Sep 9 00:17:15.662374 systemd[1]: Finished ignition-files.service - Ignition (files).
Sep 9 00:17:15.665211 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Sep 9 00:17:15.667824 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Sep 9 00:17:15.686584 systemd[1]: ignition-quench.service: Deactivated successfully.
Sep 9 00:17:15.686739 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Sep 9 00:17:15.690752 initrd-setup-root-after-ignition[1061]: grep: /sysroot/oem/oem-release: No such file or directory
Sep 9 00:17:15.694971 initrd-setup-root-after-ignition[1063]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Sep 9 00:17:15.696850 initrd-setup-root-after-ignition[1063]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Sep 9 00:17:15.698513 initrd-setup-root-after-ignition[1067]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Sep 9 00:17:15.702133 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Sep 9 00:17:15.703616 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Sep 9 00:17:15.706965 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Sep 9 00:17:15.768704 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Sep 9 00:17:15.768872 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Sep 9 00:17:15.771265 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Sep 9 00:17:15.773376 systemd[1]: Reached target initrd.target - Initrd Default Target.
Sep 9 00:17:15.777314 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Sep 9 00:17:15.778530 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Sep 9 00:17:15.813293 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Sep 9 00:17:15.815007 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Sep 9 00:17:15.836571 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Sep 9 00:17:15.839236 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Sep 9 00:17:15.839444 systemd[1]: Stopped target timers.target - Timer Units.
Sep 9 00:17:15.843559 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Sep 9 00:17:15.843749 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Sep 9 00:17:15.847453 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Sep 9 00:17:15.849793 systemd[1]: Stopped target basic.target - Basic System.
Sep 9 00:17:15.849978 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Sep 9 00:17:15.855086 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Sep 9 00:17:15.855303 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Sep 9 00:17:15.857705 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System.
Sep 9 00:17:15.862150 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Sep 9 00:17:15.862322 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Sep 9 00:17:15.867145 systemd[1]: Stopped target sysinit.target - System Initialization.
Sep 9 00:17:15.867315 systemd[1]: Stopped target local-fs.target - Local File Systems.
Sep 9 00:17:15.871533 systemd[1]: Stopped target swap.target - Swaps.
Sep 9 00:17:15.871684 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Sep 9 00:17:15.871854 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Sep 9 00:17:15.876289 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Sep 9 00:17:15.876482 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Sep 9 00:17:15.878394 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Sep 9 00:17:15.880451 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Sep 9 00:17:15.880729 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Sep 9 00:17:15.880877 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Sep 9 00:17:15.886332 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Sep 9 00:17:15.886497 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Sep 9 00:17:15.889607 systemd[1]: Stopped target paths.target - Path Units.
Sep 9 00:17:15.889752 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Sep 9 00:17:15.895210 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Sep 9 00:17:15.898193 systemd[1]: Stopped target slices.target - Slice Units.
Sep 9 00:17:15.900147 systemd[1]: Stopped target sockets.target - Socket Units.
Sep 9 00:17:15.900353 systemd[1]: iscsid.socket: Deactivated successfully.
Sep 9 00:17:15.900486 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Sep 9 00:17:15.904073 systemd[1]: iscsiuio.socket: Deactivated successfully.
Sep 9 00:17:15.904227 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Sep 9 00:17:15.906170 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Sep 9 00:17:15.906321 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Sep 9 00:17:15.907187 systemd[1]: ignition-files.service: Deactivated successfully.
Sep 9 00:17:15.907305 systemd[1]: Stopped ignition-files.service - Ignition (files).
Sep 9 00:17:15.908915 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Sep 9 00:17:15.917040 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Sep 9 00:17:15.918880 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Sep 9 00:17:15.919082 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Sep 9 00:17:15.920274 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Sep 9 00:17:15.920423 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Sep 9 00:17:15.929879 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Sep 9 00:17:15.930001 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Sep 9 00:17:15.948970 ignition[1088]: INFO : Ignition 2.21.0
Sep 9 00:17:15.950465 ignition[1088]: INFO : Stage: umount
Sep 9 00:17:15.950465 ignition[1088]: INFO : no configs at "/usr/lib/ignition/base.d"
Sep 9 00:17:15.950465 ignition[1088]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Sep 9 00:17:15.954003 ignition[1088]: INFO : umount: umount passed
Sep 9 00:17:15.954003 ignition[1088]: INFO : Ignition finished successfully
Sep 9 00:17:15.955123 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Sep 9 00:17:15.955914 systemd[1]: ignition-mount.service: Deactivated successfully.
Sep 9 00:17:15.956065 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Sep 9 00:17:15.956855 systemd[1]: Stopped target network.target - Network.
Sep 9 00:17:15.960507 systemd[1]: ignition-disks.service: Deactivated successfully.
Sep 9 00:17:15.960577 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Sep 9 00:17:15.961569 systemd[1]: ignition-kargs.service: Deactivated successfully.
Sep 9 00:17:15.961674 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Sep 9 00:17:15.964601 systemd[1]: ignition-setup.service: Deactivated successfully.
Sep 9 00:17:15.964680 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Sep 9 00:17:15.965763 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Sep 9 00:17:15.965823 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Sep 9 00:17:15.966244 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Sep 9 00:17:15.966674 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Sep 9 00:17:15.973052 systemd[1]: sysroot-boot.service: Deactivated successfully.
Sep 9 00:17:15.973244 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Sep 9 00:17:15.975948 systemd[1]: systemd-resolved.service: Deactivated successfully.
Sep 9 00:17:15.976231 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Sep 9 00:17:15.982165 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully.
Sep 9 00:17:15.983169 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Sep 9 00:17:15.983307 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Sep 9 00:17:15.983572 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Sep 9 00:17:15.983647 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Sep 9 00:17:15.989435 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully.
Sep 9 00:17:15.995680 systemd[1]: systemd-networkd.service: Deactivated successfully.
Sep 9 00:17:15.995833 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Sep 9 00:17:16.000414 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully.
Sep 9 00:17:16.000641 systemd[1]: Stopped target network-pre.target - Preparation for Network.
Sep 9 00:17:16.001809 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Sep 9 00:17:16.001859 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Sep 9 00:17:16.007773 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Sep 9 00:17:16.007862 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Sep 9 00:17:16.007945 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Sep 9 00:17:16.011562 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Sep 9 00:17:16.011633 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Sep 9 00:17:16.016376 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Sep 9 00:17:16.016432 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Sep 9 00:17:16.017641 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Sep 9 00:17:16.019014 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Sep 9 00:17:16.030950 systemd[1]: systemd-udevd.service: Deactivated successfully.
Sep 9 00:17:16.032342 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Sep 9 00:17:16.035871 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Sep 9 00:17:16.035965 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Sep 9 00:17:16.036297 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Sep 9 00:17:16.036338 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Sep 9 00:17:16.036632 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Sep 9 00:17:16.036693 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Sep 9 00:17:16.037554 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Sep 9 00:17:16.037600 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Sep 9 00:17:16.046601 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Sep 9 00:17:16.046749 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Sep 9 00:17:16.056965 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Sep 9 00:17:16.058313 systemd[1]: systemd-network-generator.service: Deactivated successfully.
Sep 9 00:17:16.058375 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line.
Sep 9 00:17:16.061320 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Sep 9 00:17:16.061382 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Sep 9 00:17:16.066377 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully.
Sep 9 00:17:16.066476 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Sep 9 00:17:16.069563 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Sep 9 00:17:16.069646 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Sep 9 00:17:16.071447 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Sep 9 00:17:16.071543 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Sep 9 00:17:16.079489 systemd[1]: network-cleanup.service: Deactivated successfully.
Sep 9 00:17:16.079649 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Sep 9 00:17:16.083469 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Sep 9 00:17:16.083624 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Sep 9 00:17:16.087714 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Sep 9 00:17:16.090360 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Sep 9 00:17:16.115378 systemd[1]: Switching root.
Sep 9 00:17:16.162606 systemd-journald[220]: Journal stopped
Sep 9 00:17:17.573061 systemd-journald[220]: Received SIGTERM from PID 1 (systemd).
Sep 9 00:17:17.573182 kernel: SELinux: policy capability network_peer_controls=1
Sep 9 00:17:17.573203 kernel: SELinux: policy capability open_perms=1
Sep 9 00:17:17.573218 kernel: SELinux: policy capability extended_socket_class=1
Sep 9 00:17:17.573234 kernel: SELinux: policy capability always_check_network=0
Sep 9 00:17:17.573249 kernel: SELinux: policy capability cgroup_seclabel=1
Sep 9 00:17:17.573265 kernel: SELinux: policy capability nnp_nosuid_transition=1
Sep 9 00:17:17.573281 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Sep 9 00:17:17.573296 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Sep 9 00:17:17.573318 kernel: SELinux: policy capability userspace_initial_context=0
Sep 9 00:17:17.573334 kernel: audit: type=1403 audit(1757377036.554:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Sep 9 00:17:17.573369 systemd[1]: Successfully loaded SELinux policy in 53.359ms.
Sep 9 00:17:17.573395 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 13.908ms.
Sep 9 00:17:17.573413 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Sep 9 00:17:17.573430 systemd[1]: Detected virtualization kvm.
Sep 9 00:17:17.573447 systemd[1]: Detected architecture x86-64.
Sep 9 00:17:17.573462 systemd[1]: Detected first boot.
Sep 9 00:17:17.573479 systemd[1]: Initializing machine ID from VM UUID.
Sep 9 00:17:17.573502 zram_generator::config[1133]: No configuration found.
Sep 9 00:17:17.573520 kernel: Guest personality initialized and is inactive
Sep 9 00:17:17.573536 kernel: VMCI host device registered (name=vmci, major=10, minor=125)
Sep 9 00:17:17.573551 kernel: Initialized host personality
Sep 9 00:17:17.573566 kernel: NET: Registered PF_VSOCK protocol family
Sep 9 00:17:17.573594 systemd[1]: Populated /etc with preset unit settings.
Sep 9 00:17:17.573619 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully.
Sep 9 00:17:17.573636 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Sep 9 00:17:17.573659 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Sep 9 00:17:17.573679 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Sep 9 00:17:17.573699 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Sep 9 00:17:17.573719 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Sep 9 00:17:17.573740 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Sep 9 00:17:17.573760 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Sep 9 00:17:17.573792 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Sep 9 00:17:17.573812 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Sep 9 00:17:17.573829 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Sep 9 00:17:17.573852 systemd[1]: Created slice user.slice - User and Session Slice.
Sep 9 00:17:17.573868 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Sep 9 00:17:17.573884 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Sep 9 00:17:17.573900 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Sep 9 00:17:17.573915 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Sep 9 00:17:17.573931 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Sep 9 00:17:17.573954 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Sep 9 00:17:17.573978 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Sep 9 00:17:17.573994 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Sep 9 00:17:17.574011 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Sep 9 00:17:17.574028 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Sep 9 00:17:17.574043 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Sep 9 00:17:17.574060 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Sep 9 00:17:17.574077 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Sep 9 00:17:17.574111 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Sep 9 00:17:17.574130 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Sep 9 00:17:17.574151 systemd[1]: Reached target slices.target - Slice Units.
Sep 9 00:17:17.574175 systemd[1]: Reached target swap.target - Swaps.
Sep 9 00:17:17.574202 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Sep 9 00:17:17.574219 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Sep 9 00:17:17.574236 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption.
Sep 9 00:17:17.574252 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Sep 9 00:17:17.574269 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Sep 9 00:17:17.574285 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Sep 9 00:17:17.574302 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Sep 9 00:17:17.574318 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Sep 9 00:17:17.574342 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Sep 9 00:17:17.574360 systemd[1]: Mounting media.mount - External Media Directory...
Sep 9 00:17:17.574376 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 9 00:17:17.574393 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Sep 9 00:17:17.574409 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Sep 9 00:17:17.574425 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Sep 9 00:17:17.574443 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Sep 9 00:17:17.574460 systemd[1]: Reached target machines.target - Containers.
Sep 9 00:17:17.574483 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Sep 9 00:17:17.574499 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Sep 9 00:17:17.574514 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Sep 9 00:17:17.574530 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Sep 9 00:17:17.574553 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Sep 9 00:17:17.574591 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Sep 9 00:17:17.574607 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Sep 9 00:17:17.574622 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Sep 9 00:17:17.574638 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Sep 9 00:17:17.574661 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Sep 9 00:17:17.574677 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Sep 9 00:17:17.574692 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Sep 9 00:17:17.574707 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Sep 9 00:17:17.574723 systemd[1]: Stopped systemd-fsck-usr.service.
Sep 9 00:17:17.574740 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Sep 9 00:17:17.574757 systemd[1]: Starting systemd-journald.service - Journal Service...
Sep 9 00:17:17.574776 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Sep 9 00:17:17.574809 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Sep 9 00:17:17.574829 kernel: loop: module loaded
Sep 9 00:17:17.574848 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Sep 9 00:17:17.574867 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials...
Sep 9 00:17:17.574887 kernel: fuse: init (API version 7.41)
Sep 9 00:17:17.574957 systemd-journald[1197]: Collecting audit messages is disabled.
Sep 9 00:17:17.574990 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Sep 9 00:17:17.575016 systemd[1]: verity-setup.service: Deactivated successfully.
Sep 9 00:17:17.575033 systemd[1]: Stopped verity-setup.service.
Sep 9 00:17:17.575075 systemd-journald[1197]: Journal started
Sep 9 00:17:17.575122 systemd-journald[1197]: Runtime Journal (/run/log/journal/eb06a26396924df7a27d32418e9d0f36) is 6M, max 48.6M, 42.5M free.
Sep 9 00:17:17.129341 systemd[1]: Queued start job for default target multi-user.target.
Sep 9 00:17:17.155274 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
Sep 9 00:17:17.155882 systemd[1]: systemd-journald.service: Deactivated successfully.
Sep 9 00:17:17.578125 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 9 00:17:17.585970 systemd[1]: Started systemd-journald.service - Journal Service.
Sep 9 00:17:17.587137 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Sep 9 00:17:17.588755 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Sep 9 00:17:17.590196 systemd[1]: Mounted media.mount - External Media Directory.
Sep 9 00:17:17.591527 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Sep 9 00:17:17.593005 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Sep 9 00:17:17.594538 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Sep 9 00:17:17.607130 kernel: ACPI: bus type drm_connector registered
Sep 9 00:17:17.608404 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Sep 9 00:17:17.610356 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Sep 9 00:17:17.610632 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Sep 9 00:17:17.612311 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Sep 9 00:17:17.612640 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Sep 9 00:17:17.614432 systemd[1]: modprobe@drm.service: Deactivated successfully.
Sep 9 00:17:17.614701 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Sep 9 00:17:17.617300 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Sep 9 00:17:17.617683 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Sep 9 00:17:17.619532 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Sep 9 00:17:17.619799 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Sep 9 00:17:17.621384 systemd[1]: modprobe@loop.service: Deactivated successfully.
Sep 9 00:17:17.621661 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Sep 9 00:17:17.623474 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Sep 9 00:17:17.625158 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Sep 9 00:17:17.626920 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Sep 9 00:17:17.628763 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials.
Sep 9 00:17:17.649631 systemd[1]: Reached target network-pre.target - Preparation for Network.
Sep 9 00:17:17.653523 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Sep 9 00:17:17.656738 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Sep 9 00:17:17.658086 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Sep 9 00:17:17.658132 systemd[1]: Reached target local-fs.target - Local File Systems.
Sep 9 00:17:17.660504 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management.
Sep 9 00:17:17.674220 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Sep 9 00:17:17.764030 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Sep 9 00:17:17.766212 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Sep 9 00:17:17.768680 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Sep 9 00:17:17.770002 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Sep 9 00:17:17.772267 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Sep 9 00:17:17.784923 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Sep 9 00:17:17.789182 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Sep 9 00:17:17.793114 systemd-journald[1197]: Time spent on flushing to /var/log/journal/eb06a26396924df7a27d32418e9d0f36 is 20.960ms for 983 entries.
Sep 9 00:17:17.793114 systemd-journald[1197]: System Journal (/var/log/journal/eb06a26396924df7a27d32418e9d0f36) is 8M, max 195.6M, 187.6M free.
Sep 9 00:17:18.754197 systemd-journald[1197]: Received client request to flush runtime journal.
Sep 9 00:17:18.754284 kernel: loop0: detected capacity change from 0 to 146240
Sep 9 00:17:18.754312 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Sep 9 00:17:18.754334 kernel: loop1: detected capacity change from 0 to 221472
Sep 9 00:17:18.754356 kernel: loop2: detected capacity change from 0 to 113872
Sep 9 00:17:18.754378 kernel: loop3: detected capacity change from 0 to 146240
Sep 9 00:17:18.754397 kernel: loop4: detected capacity change from 0 to 221472
Sep 9 00:17:18.754421 kernel: loop5: detected capacity change from 0 to 113872
Sep 9 00:17:18.754439 zram_generator::config[1291]: No configuration found.
Sep 9 00:17:17.793727 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Sep 9 00:17:17.798260 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Sep 9 00:17:17.802375 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Sep 9 00:17:17.802674 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Sep 9 00:17:17.803127 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Sep 9 00:17:17.959421 systemd-tmpfiles[1238]: ACLs are not supported, ignoring.
Sep 9 00:17:17.959435 systemd-tmpfiles[1238]: ACLs are not supported, ignoring.
Sep 9 00:17:17.960972 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Sep 9 00:17:17.965736 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Sep 9 00:17:18.080032 (sd-merge)[1257]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'.
Sep 9 00:17:18.081017 (sd-merge)[1257]: Merged extensions into '/usr'.
Sep 9 00:17:18.281291 systemd[1]: Reload requested from client PID 1237 ('systemd-sysext') (unit systemd-sysext.service)...
Sep 9 00:17:18.281307 systemd[1]: Reloading...
Sep 9 00:17:18.541559 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Sep 9 00:17:18.624831 systemd[1]: Reloading finished in 343 ms.
Sep 9 00:17:18.655464 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Sep 9 00:17:18.657236 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Sep 9 00:17:18.661647 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Sep 9 00:17:18.706532 systemd[1]: Starting ensure-sysext.service...
Sep 9 00:17:18.709332 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk...
Sep 9 00:17:18.729977 systemd[1]: Reload requested from client PID 1329 ('systemctl') (unit ensure-sysext.service)...
Sep 9 00:17:18.729990 systemd[1]: Reloading...
Sep 9 00:17:18.790135 zram_generator::config[1357]: No configuration found.
Sep 9 00:17:18.890346 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Sep 9 00:17:18.972476 systemd[1]: Reloading finished in 242 ms.
Sep 9 00:17:18.991725 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Sep 9 00:17:18.993484 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Sep 9 00:17:19.030140 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Sep 9 00:17:19.032944 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 9 00:17:19.033152 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Sep 9 00:17:19.034436 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Sep 9 00:17:19.047018 ldconfig[1232]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Sep 9 00:17:19.048433 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Sep 9 00:17:19.052280 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Sep 9 00:17:19.053400 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Sep 9 00:17:19.053508 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Sep 9 00:17:19.053633 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 9 00:17:19.055669 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Sep 9 00:17:19.055882 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Sep 9 00:17:19.058684 systemd[1]: modprobe@loop.service: Deactivated successfully.
Sep 9 00:17:19.058895 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Sep 9 00:17:19.061486 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Sep 9 00:17:19.061741 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Sep 9 00:17:19.067205 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 9 00:17:19.067397 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Sep 9 00:17:19.068820 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Sep 9 00:17:19.071006 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Sep 9 00:17:19.133315 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Sep 9 00:17:19.134547 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Sep 9 00:17:19.134663 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Sep 9 00:17:19.134792 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 9 00:17:19.137799 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Sep 9 00:17:19.138047 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Sep 9 00:17:19.139685 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Sep 9 00:17:19.139911 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Sep 9 00:17:19.142589 systemd[1]: modprobe@loop.service: Deactivated successfully.
Sep 9 00:17:19.142827 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Sep 9 00:17:19.149050 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 9 00:17:19.149401 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Sep 9 00:17:19.150706 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Sep 9 00:17:19.152850 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Sep 9 00:17:19.162613 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Sep 9 00:17:19.165635 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Sep 9 00:17:19.167000 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Sep 9 00:17:19.167117 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Sep 9 00:17:19.167274 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 9 00:17:19.168488 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Sep 9 00:17:19.168702 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Sep 9 00:17:19.170764 systemd[1]: modprobe@drm.service: Deactivated successfully.
Sep 9 00:17:19.171008 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Sep 9 00:17:19.172691 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Sep 9 00:17:19.172936 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Sep 9 00:17:19.174676 systemd[1]: modprobe@loop.service: Deactivated successfully.
Sep 9 00:17:19.174883 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Sep 9 00:17:19.179336 systemd[1]: Finished ensure-sysext.service.
Sep 9 00:17:19.184078 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Sep 9 00:17:19.184288 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Sep 9 00:17:19.389179 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Sep 9 00:17:19.564060 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Sep 9 00:17:19.566801 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Sep 9 00:17:19.567583 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk.
Sep 9 00:17:19.571620 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Sep 9 00:17:19.574155 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Sep 9 00:17:19.608742 systemd-tmpfiles[1426]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring.
Sep 9 00:17:19.608804 systemd-tmpfiles[1426]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring.
Sep 9 00:17:19.609251 systemd-tmpfiles[1426]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Sep 9 00:17:19.609408 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Sep 9 00:17:19.610859 systemd-tmpfiles[1426]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Sep 9 00:17:19.611896 systemd-tmpfiles[1426]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Sep 9 00:17:19.612273 systemd-tmpfiles[1426]: ACLs are not supported, ignoring.
Sep 9 00:17:19.612375 systemd-tmpfiles[1426]: ACLs are not supported, ignoring.
Sep 9 00:17:19.615129 systemd-tmpfiles[1425]: ACLs are not supported, ignoring.
Sep 9 00:17:19.615145 systemd-tmpfiles[1425]: ACLs are not supported, ignoring.
Sep 9 00:17:19.617204 systemd-tmpfiles[1426]: Detected autofs mount point /boot during canonicalization of boot.
Sep 9 00:17:19.617217 systemd-tmpfiles[1426]: Skipping /boot
Sep 9 00:17:19.621622 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Sep 9 00:17:19.624898 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Sep 9 00:17:19.635750 systemd-tmpfiles[1426]: Detected autofs mount point /boot during canonicalization of boot.
Sep 9 00:17:19.635764 systemd-tmpfiles[1426]: Skipping /boot
Sep 9 00:17:19.679595 systemd-udevd[1431]: Using default interface naming scheme 'v255'.
Sep 9 00:17:19.702638 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Sep 9 00:17:19.708146 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Sep 9 00:17:19.725463 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Sep 9 00:17:19.799222 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
Sep 9 00:17:19.867373 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Sep 9 00:17:19.875919 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Sep 9 00:17:19.879677 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Sep 9 00:17:19.885453 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Sep 9 00:17:19.897966 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Sep 9 00:17:19.902347 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Sep 9 00:17:19.906353 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Sep 9 00:17:19.908453 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Sep 9 00:17:19.926070 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Sep 9 00:17:19.930172 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Sep 9 00:17:19.952845 kernel: mousedev: PS/2 mouse device common for all mice
Sep 9 00:17:19.945443 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Sep 9 00:17:19.954324 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Sep 9 00:17:19.978147 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Sep 9 00:17:19.981928 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Sep 9 00:17:19.993904 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Sep 9 00:17:20.000119 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3
Sep 9 00:17:20.003044 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Sep 9 00:17:20.005335 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Sep 9 00:17:20.007124 kernel: ACPI: button: Power Button [PWRF]
Sep 9 00:17:20.014366 augenrules[1511]: No rules
Sep 9 00:17:20.015476 systemd[1]: audit-rules.service: Deactivated successfully.
Sep 9 00:17:20.015777 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Sep 9 00:17:20.103381 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt
Sep 9 00:17:20.103751 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD
Sep 9 00:17:20.159684 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Sep 9 00:17:20.226619 systemd-networkd[1439]: lo: Link UP
Sep 9 00:17:20.226631 systemd-networkd[1439]: lo: Gained carrier
Sep 9 00:17:20.228566 systemd-networkd[1439]: Enumeration completed
Sep 9 00:17:20.228678 systemd[1]: Started systemd-networkd.service - Network Configuration.
Sep 9 00:17:20.229245 systemd-networkd[1439]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Sep 9 00:17:20.229251 systemd-networkd[1439]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Sep 9 00:17:20.230521 systemd-networkd[1439]: eth0: Link UP
Sep 9 00:17:20.231284 systemd-networkd[1439]: eth0: Gained carrier
Sep 9 00:17:20.231320 systemd-networkd[1439]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Sep 9 00:17:20.235249 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd...
Sep 9 00:17:20.241416 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Sep 9 00:17:20.248215 systemd-networkd[1439]: eth0: DHCPv4 address 10.0.0.54/16, gateway 10.0.0.1 acquired from 10.0.0.1
Sep 9 00:17:20.271749 systemd-resolved[1477]: Positive Trust Anchors:
Sep 9 00:17:20.271770 systemd-resolved[1477]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Sep 9 00:17:20.271810 systemd-resolved[1477]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Sep 9 00:17:20.275820 systemd-resolved[1477]: Defaulting to hostname 'linux'.
Sep 9 00:17:20.277790 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd.
Sep 9 00:17:20.278173 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Sep 9 00:17:20.278476 systemd[1]: Reached target network.target - Network.
Sep 9 00:17:20.278752 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Sep 9 00:17:20.283915 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
Sep 9 00:17:20.284019 systemd[1]: Reached target time-set.target - System Time Set.
Sep 9 00:17:21.464700 systemd-resolved[1477]: Clock change detected. Flushing caches.
Sep 9 00:17:21.464851 systemd-timesyncd[1478]: Contacted time server 10.0.0.1:123 (10.0.0.1).
Sep 9 00:17:21.464973 systemd-timesyncd[1478]: Initial clock synchronization to Tue 2025-09-09 00:17:21.464636 UTC.
Sep 9 00:17:21.487217 kernel: kvm_amd: TSC scaling supported
Sep 9 00:17:21.487300 kernel: kvm_amd: Nested Virtualization enabled
Sep 9 00:17:21.487345 kernel: kvm_amd: Nested Paging enabled
Sep 9 00:17:21.488163 kernel: kvm_amd: LBR virtualization supported
Sep 9 00:17:21.488214 kernel: kvm_amd: Virtual VMLOAD VMSAVE supported
Sep 9 00:17:21.489203 kernel: kvm_amd: Virtual GIF supported
Sep 9 00:17:21.515071 kernel: EDAC MC: Ver: 3.0.0
Sep 9 00:17:21.541252 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Sep 9 00:17:21.542954 systemd[1]: Reached target sysinit.target - System Initialization.
Sep 9 00:17:21.544228 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Sep 9 00:17:21.545507 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Sep 9 00:17:21.546784 systemd[1]: Started google-oslogin-cache.timer - NSS cache refresh timer.
Sep 9 00:17:21.548144 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Sep 9 00:17:21.549377 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Sep 9 00:17:21.550637 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Sep 9 00:17:21.551919 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Sep 9 00:17:21.551962 systemd[1]: Reached target paths.target - Path Units.
Sep 9 00:17:21.552934 systemd[1]: Reached target timers.target - Timer Units.
Sep 9 00:17:21.555355 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Sep 9 00:17:21.559107 systemd[1]: Starting docker.socket - Docker Socket for the API...
Sep 9 00:17:21.563664 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local).
Sep 9 00:17:21.565174 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK).
Sep 9 00:17:21.566471 systemd[1]: Reached target ssh-access.target - SSH Access Available.
Sep 9 00:17:21.576609 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Sep 9 00:17:21.578254 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket.
Sep 9 00:17:21.580125 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Sep 9 00:17:21.581965 systemd[1]: Reached target sockets.target - Socket Units.
Sep 9 00:17:21.583156 systemd[1]: Reached target basic.target - Basic System.
Sep 9 00:17:21.584338 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Sep 9 00:17:21.584368 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Sep 9 00:17:21.585528 systemd[1]: Starting containerd.service - containerd container runtime...
Sep 9 00:17:21.587864 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Sep 9 00:17:21.590609 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Sep 9 00:17:21.593505 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Sep 9 00:17:21.596361 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Sep 9 00:17:21.597678 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Sep 9 00:17:21.600249 systemd[1]: Starting google-oslogin-cache.service - NSS cache refresh...
Sep 9 00:17:21.604314 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Sep 9 00:17:21.608325 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Sep 9 00:17:21.612091 jq[1549]: false
Sep 9 00:17:21.612297 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Sep 9 00:17:21.617158 google_oslogin_nss_cache[1551]: oslogin_cache_refresh[1551]: Refreshing passwd entry cache
Sep 9 00:17:21.617313 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Sep 9 00:17:21.617510 oslogin_cache_refresh[1551]: Refreshing passwd entry cache
Sep 9 00:17:21.619209 extend-filesystems[1550]: Found /dev/vda6
Sep 9 00:17:21.626638 extend-filesystems[1550]: Found /dev/vda9
Sep 9 00:17:21.629000 systemd[1]: Starting systemd-logind.service - User Login Management...
Sep 9 00:17:21.629897 google_oslogin_nss_cache[1551]: oslogin_cache_refresh[1551]: Failure getting users, quitting
Sep 9 00:17:21.629897 google_oslogin_nss_cache[1551]: oslogin_cache_refresh[1551]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak.
Sep 9 00:17:21.629795 oslogin_cache_refresh[1551]: Failure getting users, quitting
Sep 9 00:17:21.629824 oslogin_cache_refresh[1551]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak.
Sep 9 00:17:21.632580 extend-filesystems[1550]: Checking size of /dev/vda9
Sep 9 00:17:21.631412 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Sep 9 00:17:21.630295 oslogin_cache_refresh[1551]: Refreshing group entry cache
Sep 9 00:17:21.634824 google_oslogin_nss_cache[1551]: oslogin_cache_refresh[1551]: Refreshing group entry cache
Sep 9 00:17:21.632232 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Sep 9 00:17:21.633217 systemd[1]: Starting update-engine.service - Update Engine...
Sep 9 00:17:21.637618 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Sep 9 00:17:21.639947 google_oslogin_nss_cache[1551]: oslogin_cache_refresh[1551]: Failure getting groups, quitting
Sep 9 00:17:21.639999 oslogin_cache_refresh[1551]: Failure getting groups, quitting
Sep 9 00:17:21.640127 google_oslogin_nss_cache[1551]: oslogin_cache_refresh[1551]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak.
Sep 9 00:17:21.640187 oslogin_cache_refresh[1551]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak.
Sep 9 00:17:21.644241 extend-filesystems[1550]: Resized partition /dev/vda9
Sep 9 00:17:21.647238 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Sep 9 00:17:21.648929 extend-filesystems[1577]: resize2fs 1.47.2 (1-Jan-2025)
Sep 9 00:17:21.650996 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Sep 9 00:17:21.651361 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Sep 9 00:17:21.651784 jq[1572]: true
Sep 9 00:17:21.651818 systemd[1]: google-oslogin-cache.service: Deactivated successfully.
Sep 9 00:17:21.652382 systemd[1]: Finished google-oslogin-cache.service - NSS cache refresh.
Sep 9 00:17:21.654184 systemd[1]: motdgen.service: Deactivated successfully.
Sep 9 00:17:21.654519 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Sep 9 00:17:21.660105 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks
Sep 9 00:17:21.661576 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Sep 9 00:17:21.661933 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Sep 9 00:17:21.684075 update_engine[1569]: I20250909 00:17:21.683869 1569 main.cc:92] Flatcar Update Engine starting
Sep 9 00:17:21.697508 (ntainerd)[1580]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Sep 9 00:17:21.701543 jq[1579]: true
Sep 9 00:17:21.710353 tar[1578]: linux-amd64/helm
Sep 9 00:17:21.711082 kernel: EXT4-fs (vda9): resized filesystem to 1864699
Sep 9 00:17:21.737454 extend-filesystems[1577]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required
Sep 9 00:17:21.737454 extend-filesystems[1577]: old_desc_blocks = 1, new_desc_blocks = 1
Sep 9 00:17:21.737454 extend-filesystems[1577]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long.
Sep 9 00:17:21.746161 extend-filesystems[1550]: Resized filesystem in /dev/vda9
Sep 9 00:17:21.745849 systemd[1]: extend-filesystems.service: Deactivated successfully.
Sep 9 00:17:21.746213 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Sep 9 00:17:21.758571 dbus-daemon[1547]: [system] SELinux support is enabled
Sep 9 00:17:21.758798 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Sep 9 00:17:21.765300 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Sep 9 00:17:21.765349 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Sep 9 00:17:21.766989 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Sep 9 00:17:21.767022 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Sep 9 00:17:21.777505 systemd[1]: Started update-engine.service - Update Engine.
Sep 9 00:17:21.781466 bash[1610]: Updated "/home/core/.ssh/authorized_keys"
Sep 9 00:17:21.781638 update_engine[1569]: I20250909 00:17:21.777570 1569 update_check_scheduler.cc:74] Next update check in 2m9s
Sep 9 00:17:21.782383 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Sep 9 00:17:21.784304 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Sep 9 00:17:21.785249 systemd-logind[1565]: Watching system buttons on /dev/input/event2 (Power Button)
Sep 9 00:17:21.785282 systemd-logind[1565]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
Sep 9 00:17:21.787481 systemd-logind[1565]: New seat seat0.
Sep 9 00:17:21.789891 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met.
Sep 9 00:17:21.797602 systemd[1]: Started systemd-logind.service - User Login Management.
Sep 9 00:17:21.861328 locksmithd[1611]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Sep 9 00:17:22.042100 sshd_keygen[1576]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Sep 9 00:17:22.071576 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Sep 9 00:17:22.184529 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd.
Sep 9 00:17:22.189279 systemd[1]: Starting issuegen.service - Generate /run/issue...
Sep 9 00:17:22.191662 systemd[1]: Started sshd@0-10.0.0.54:22-10.0.0.1:40978.service - OpenSSH per-connection server daemon (10.0.0.1:40978).
Sep 9 00:17:22.203355 containerd[1580]: time="2025-09-09T00:17:22Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8
Sep 9 00:17:22.204143 containerd[1580]: time="2025-09-09T00:17:22.204102684Z" level=info msg="starting containerd" revision=06b99ca80cdbfbc6cc8bd567021738c9af2b36ce version=v2.0.4
Sep 9 00:17:22.207666 systemd[1]: issuegen.service: Deactivated successfully.
Sep 9 00:17:22.207962 systemd[1]: Finished issuegen.service - Generate /run/issue.
Sep 9 00:17:22.213512 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
Sep 9 00:17:22.218076 containerd[1580]: time="2025-09-09T00:17:22.217268138Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="17.383µs"
Sep 9 00:17:22.218076 containerd[1580]: time="2025-09-09T00:17:22.217311520Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1
Sep 9 00:17:22.218076 containerd[1580]: time="2025-09-09T00:17:22.217339412Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1
Sep 9 00:17:22.218076 containerd[1580]: time="2025-09-09T00:17:22.217679530Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1
Sep 9 00:17:22.218076 containerd[1580]: time="2025-09-09T00:17:22.217697083Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1
Sep 9 00:17:22.218076 containerd[1580]: time="2025-09-09T00:17:22.217729594Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1
Sep 9 00:17:22.218076 containerd[1580]: time="2025-09-09T00:17:22.217810225Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1
Sep 9 00:17:22.218076 containerd[1580]: time="2025-09-09T00:17:22.217822318Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1
Sep 9 00:17:22.218403 containerd[1580]: time="2025-09-09T00:17:22.218376908Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1
Sep 9 00:17:22.218473 containerd[1580]: time="2025-09-09T00:17:22.218460545Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1
Sep 9 00:17:22.218524 containerd[1580]: time="2025-09-09T00:17:22.218512222Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1
Sep 9 00:17:22.218580 containerd[1580]: time="2025-09-09T00:17:22.218567566Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1
Sep 9 00:17:22.218751 containerd[1580]: time="2025-09-09T00:17:22.218732385Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1
Sep 9 00:17:22.219084 containerd[1580]: time="2025-09-09T00:17:22.219065420Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1
Sep 9 00:17:22.219170 containerd[1580]: time="2025-09-09T00:17:22.219153635Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1
Sep 9 00:17:22.219219 containerd[1580]: time="2025-09-09T00:17:22.219207376Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1
Sep 9 00:17:22.219319 containerd[1580]: time="2025-09-09T00:17:22.219302935Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1
Sep 9 00:17:22.219655 containerd[1580]: time="2025-09-09T00:17:22.219636050Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1
Sep 9 00:17:22.219787 containerd[1580]: time="2025-09-09T00:17:22.219771153Z" level=info msg="metadata content store policy set" policy=shared
Sep 9 00:17:22.247207 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
Sep 9 00:17:22.250965 systemd[1]: Started getty@tty1.service - Getty on tty1.
Sep 9 00:17:22.255268 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0.
Sep 9 00:17:22.263998 systemd[1]: Reached target getty.target - Login Prompts.
Sep 9 00:17:22.278506 containerd[1580]: time="2025-09-09T00:17:22.278444277Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1
Sep 9 00:17:22.278589 containerd[1580]: time="2025-09-09T00:17:22.278528455Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1
Sep 9 00:17:22.278589 containerd[1580]: time="2025-09-09T00:17:22.278566096Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1
Sep 9 00:17:22.278589 containerd[1580]: time="2025-09-09T00:17:22.278582957Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1
Sep 9 00:17:22.278643 containerd[1580]: time="2025-09-09T00:17:22.278598657Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1
Sep 9 00:17:22.278643 containerd[1580]: time="2025-09-09T00:17:22.278613976Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1
Sep 9 00:17:22.278643 containerd[1580]: time="2025-09-09T00:17:22.278630396Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1
Sep 9 00:17:22.278643 containerd[1580]: time="2025-09-09T00:17:22.278644082Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1
Sep 9 00:17:22.278732 containerd[1580]: time="2025-09-09T00:17:22.278654602Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1
Sep 9 00:17:22.278732 containerd[1580]: time="2025-09-09T00:17:22.278667145Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1
Sep 9 00:17:22.278732 containerd[1580]: time="2025-09-09T00:17:22.278676803Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1
Sep 9 00:17:22.278732 containerd[1580]: time="2025-09-09T00:17:22.278689938Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2
Sep 9 00:17:22.278920 containerd[1580]: time="2025-09-09T00:17:22.278888821Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1
Sep 9 00:17:22.278920 containerd[1580]: time="2025-09-09T00:17:22.278918347Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1
Sep 9 00:17:22.278965 containerd[1580]: time="2025-09-09T00:17:22.278932914Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1
Sep 9 00:17:22.278965 containerd[1580]: time="2025-09-09T00:17:22.278945918Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1
Sep 9 00:17:22.278965 containerd[1580]: time="2025-09-09T00:17:22.278956718Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1
Sep 9 00:17:22.279018 containerd[1580]: time="2025-09-09T00:17:22.278991985Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1
Sep 9 00:17:22.279018 containerd[1580]: time="2025-09-09T00:17:22.279004759Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1
Sep 9 00:17:22.280845 containerd[1580]: time="2025-09-09T00:17:22.279018585Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1
Sep 9 00:17:22.280845 containerd[1580]: time="2025-09-09T00:17:22.279090840Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1
Sep 9 00:17:22.280845 containerd[1580]: time="2025-09-09T00:17:22.279107932Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1
Sep 9 00:17:22.280845 containerd[1580]: time="2025-09-09T00:17:22.279205205Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1
Sep 9 00:17:22.280845 containerd[1580]: time="2025-09-09T00:17:22.280831735Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\""
Sep 9 00:17:22.280954 containerd[1580]: time="2025-09-09T00:17:22.280868344Z" level=info msg="Start snapshots syncer"
Sep 9 00:17:22.281102 containerd[1580]: time="2025-09-09T00:17:22.280898671Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1
Sep 9 00:17:22.281501 containerd[1580]: time="2025-09-09T00:17:22.281457529Z" level=info msg="starting cri plugin" config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}"
Sep 9 00:17:22.281647 containerd[1580]: time="2025-09-09T00:17:22.281512342Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1
Sep 9 00:17:22.281647 containerd[1580]: time="2025-09-09T00:17:22.281619363Z" level=info
msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 Sep 9 00:17:22.281792 containerd[1580]: time="2025-09-09T00:17:22.281761239Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 Sep 9 00:17:22.281817 containerd[1580]: time="2025-09-09T00:17:22.281798619Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 Sep 9 00:17:22.281848 containerd[1580]: time="2025-09-09T00:17:22.281811072Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 Sep 9 00:17:22.281848 containerd[1580]: time="2025-09-09T00:17:22.281838454Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 Sep 9 00:17:22.281895 containerd[1580]: time="2025-09-09T00:17:22.281854364Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 Sep 9 00:17:22.281895 containerd[1580]: time="2025-09-09T00:17:22.281868029Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 Sep 9 00:17:22.281895 containerd[1580]: time="2025-09-09T00:17:22.281878779Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 Sep 9 00:17:22.281948 containerd[1580]: time="2025-09-09T00:17:22.281910779Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 Sep 9 00:17:22.281948 containerd[1580]: time="2025-09-09T00:17:22.281925677Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 Sep 9 00:17:22.281948 containerd[1580]: time="2025-09-09T00:17:22.281939183Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 Sep 9 00:17:22.282013 containerd[1580]: time="2025-09-09T00:17:22.281997202Z" level=info msg="loading plugin" 
id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Sep 9 00:17:22.282034 containerd[1580]: time="2025-09-09T00:17:22.282014053Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Sep 9 00:17:22.282034 containerd[1580]: time="2025-09-09T00:17:22.282026186Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Sep 9 00:17:22.282104 containerd[1580]: time="2025-09-09T00:17:22.282060460Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Sep 9 00:17:22.282104 containerd[1580]: time="2025-09-09T00:17:22.282071020Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 Sep 9 00:17:22.282104 containerd[1580]: time="2025-09-09T00:17:22.282084145Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 Sep 9 00:17:22.282104 containerd[1580]: time="2025-09-09T00:17:22.282101918Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 Sep 9 00:17:22.282180 containerd[1580]: time="2025-09-09T00:17:22.282127696Z" level=info msg="runtime interface created" Sep 9 00:17:22.282180 containerd[1580]: time="2025-09-09T00:17:22.282134028Z" level=info msg="created NRI interface" Sep 9 00:17:22.282180 containerd[1580]: time="2025-09-09T00:17:22.282141993Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 Sep 9 00:17:22.282180 containerd[1580]: time="2025-09-09T00:17:22.282154978Z" level=info msg="Connect containerd service" Sep 9 00:17:22.282259 containerd[1580]: time="2025-09-09T00:17:22.282184232Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Sep 9 00:17:22.283556 containerd[1580]: 
time="2025-09-09T00:17:22.283501373Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Sep 9 00:17:22.293101 sshd[1633]: Accepted publickey for core from 10.0.0.1 port 40978 ssh2: RSA SHA256:71pIFbyCQBIroIGqc5DeH9snrZBBxVf1ertHDrOSKjM Sep 9 00:17:22.297226 sshd-session[1633]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 00:17:22.308756 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Sep 9 00:17:22.311845 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Sep 9 00:17:22.316415 systemd-logind[1565]: New session 1 of user core. Sep 9 00:17:22.422130 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Sep 9 00:17:22.426548 systemd[1]: Starting user@500.service - User Manager for UID 500... Sep 9 00:17:22.447920 (systemd)[1653]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Sep 9 00:17:22.452007 systemd-logind[1565]: New session c1 of user core. 
Sep 9 00:17:22.563236 containerd[1580]: time="2025-09-09T00:17:22.562943897Z" level=info msg="Start subscribing containerd event"
Sep 9 00:17:22.563360 containerd[1580]: time="2025-09-09T00:17:22.563210618Z" level=info msg="Start recovering state"
Sep 9 00:17:22.563497 containerd[1580]: time="2025-09-09T00:17:22.563466397Z" level=info msg="Start event monitor"
Sep 9 00:17:22.563529 containerd[1580]: time="2025-09-09T00:17:22.563500481Z" level=info msg="Start cni network conf syncer for default"
Sep 9 00:17:22.563529 containerd[1580]: time="2025-09-09T00:17:22.563516752Z" level=info msg="Start streaming server"
Sep 9 00:17:22.563600 containerd[1580]: time="2025-09-09T00:17:22.563530017Z" level=info msg="Registered namespace \"k8s.io\" with NRI"
Sep 9 00:17:22.563600 containerd[1580]: time="2025-09-09T00:17:22.563550315Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Sep 9 00:17:22.563748 containerd[1580]: time="2025-09-09T00:17:22.563701017Z" level=info msg=serving... address=/run/containerd/containerd.sock
Sep 9 00:17:22.563748 containerd[1580]: time="2025-09-09T00:17:22.563558791Z" level=info msg="runtime interface starting up..."
Sep 9 00:17:22.563862 containerd[1580]: time="2025-09-09T00:17:22.563753476Z" level=info msg="starting plugins..."
Sep 9 00:17:22.563862 containerd[1580]: time="2025-09-09T00:17:22.563804431Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
Sep 9 00:17:22.564273 systemd[1]: Started containerd.service - containerd container runtime.
Sep 9 00:17:22.564575 containerd[1580]: time="2025-09-09T00:17:22.564532958Z" level=info msg="containerd successfully booted in 0.361786s"
Sep 9 00:17:22.688387 tar[1578]: linux-amd64/LICENSE
Sep 9 00:17:22.688387 tar[1578]: linux-amd64/README.md
Sep 9 00:17:22.721922 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin.
Sep 9 00:17:22.759679 systemd[1653]: Queued start job for default target default.target.
Sep 9 00:17:22.782200 systemd[1653]: Created slice app.slice - User Application Slice.
Sep 9 00:17:22.782247 systemd[1653]: Reached target paths.target - Paths.
Sep 9 00:17:22.782305 systemd[1653]: Reached target timers.target - Timers.
Sep 9 00:17:22.784389 systemd[1653]: Starting dbus.socket - D-Bus User Message Bus Socket...
Sep 9 00:17:22.798355 systemd[1653]: Listening on dbus.socket - D-Bus User Message Bus Socket.
Sep 9 00:17:22.798594 systemd[1653]: Reached target sockets.target - Sockets.
Sep 9 00:17:22.798648 systemd[1653]: Reached target basic.target - Basic System.
Sep 9 00:17:22.798694 systemd[1653]: Reached target default.target - Main User Target.
Sep 9 00:17:22.798733 systemd[1653]: Startup finished in 337ms.
Sep 9 00:17:22.799698 systemd[1]: Started user@500.service - User Manager for UID 500.
Sep 9 00:17:22.812205 systemd[1]: Started session-1.scope - Session 1 of User core.
Sep 9 00:17:22.884719 systemd[1]: Started sshd@1-10.0.0.54:22-10.0.0.1:40992.service - OpenSSH per-connection server daemon (10.0.0.1:40992).
Sep 9 00:17:22.940344 sshd[1673]: Accepted publickey for core from 10.0.0.1 port 40992 ssh2: RSA SHA256:71pIFbyCQBIroIGqc5DeH9snrZBBxVf1ertHDrOSKjM
Sep 9 00:17:22.941734 sshd-session[1673]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 9 00:17:22.946261 systemd-logind[1565]: New session 2 of user core.
Sep 9 00:17:22.956177 systemd[1]: Started session-2.scope - Session 2 of User core.
Sep 9 00:17:23.010208 sshd[1675]: Connection closed by 10.0.0.1 port 40992
Sep 9 00:17:23.010592 sshd-session[1673]: pam_unix(sshd:session): session closed for user core
Sep 9 00:17:23.022751 systemd[1]: sshd@1-10.0.0.54:22-10.0.0.1:40992.service: Deactivated successfully.
Sep 9 00:17:23.024570 systemd[1]: session-2.scope: Deactivated successfully.
Sep 9 00:17:23.025328 systemd-logind[1565]: Session 2 logged out. Waiting for processes to exit.
Sep 9 00:17:23.028080 systemd[1]: Started sshd@2-10.0.0.54:22-10.0.0.1:40998.service - OpenSSH per-connection server daemon (10.0.0.1:40998).
Sep 9 00:17:23.034159 systemd-logind[1565]: Removed session 2.
Sep 9 00:17:23.065195 systemd-networkd[1439]: eth0: Gained IPv6LL
Sep 9 00:17:23.068612 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Sep 9 00:17:23.070580 systemd[1]: Reached target network-online.target - Network is Online.
Sep 9 00:17:23.073315 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent...
Sep 9 00:17:23.075667 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 9 00:17:23.077843 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
Sep 9 00:17:23.093088 sshd[1681]: Accepted publickey for core from 10.0.0.1 port 40998 ssh2: RSA SHA256:71pIFbyCQBIroIGqc5DeH9snrZBBxVf1ertHDrOSKjM
Sep 9 00:17:23.093351 sshd-session[1681]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 9 00:17:23.099728 systemd-logind[1565]: New session 3 of user core.
Sep 9 00:17:23.100919 systemd[1]: Started session-3.scope - Session 3 of User core.
Sep 9 00:17:23.107692 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
Sep 9 00:17:23.116763 systemd[1]: coreos-metadata.service: Deactivated successfully.
Sep 9 00:17:23.117117 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent.
Sep 9 00:17:23.151096 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
Sep 9 00:17:23.162894 sshd[1700]: Connection closed by 10.0.0.1 port 40998
Sep 9 00:17:23.163174 sshd-session[1681]: pam_unix(sshd:session): session closed for user core
Sep 9 00:17:23.167981 systemd[1]: sshd@2-10.0.0.54:22-10.0.0.1:40998.service: Deactivated successfully.
Sep 9 00:17:23.169847 systemd[1]: session-3.scope: Deactivated successfully.
Sep 9 00:17:23.170653 systemd-logind[1565]: Session 3 logged out. Waiting for processes to exit.
Sep 9 00:17:23.171792 systemd-logind[1565]: Removed session 3.
Sep 9 00:17:24.538236 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 9 00:17:24.540074 systemd[1]: Reached target multi-user.target - Multi-User System.
Sep 9 00:17:24.541359 systemd[1]: Startup finished in 3.313s (kernel) + 8.912s (initrd) + 6.862s (userspace) = 19.088s.
Sep 9 00:17:24.569509 (kubelet)[1712]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Sep 9 00:17:25.122289 kubelet[1712]: E0909 00:17:25.122211 1712 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Sep 9 00:17:25.126836 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Sep 9 00:17:25.127032 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Sep 9 00:17:25.127457 systemd[1]: kubelet.service: Consumed 1.807s CPU time, 265.2M memory peak.
Sep 9 00:17:33.179542 systemd[1]: Started sshd@3-10.0.0.54:22-10.0.0.1:45928.service - OpenSSH per-connection server daemon (10.0.0.1:45928).
Sep 9 00:17:33.228453 sshd[1725]: Accepted publickey for core from 10.0.0.1 port 45928 ssh2: RSA SHA256:71pIFbyCQBIroIGqc5DeH9snrZBBxVf1ertHDrOSKjM
Sep 9 00:17:33.230215 sshd-session[1725]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 9 00:17:33.234764 systemd-logind[1565]: New session 4 of user core.
Sep 9 00:17:33.250211 systemd[1]: Started session-4.scope - Session 4 of User core.
Sep 9 00:17:33.304715 sshd[1727]: Connection closed by 10.0.0.1 port 45928
Sep 9 00:17:33.305151 sshd-session[1725]: pam_unix(sshd:session): session closed for user core
Sep 9 00:17:33.333148 systemd[1]: sshd@3-10.0.0.54:22-10.0.0.1:45928.service: Deactivated successfully.
Sep 9 00:17:33.335267 systemd[1]: session-4.scope: Deactivated successfully.
Sep 9 00:17:33.336030 systemd-logind[1565]: Session 4 logged out. Waiting for processes to exit.
Sep 9 00:17:33.338948 systemd[1]: Started sshd@4-10.0.0.54:22-10.0.0.1:45938.service - OpenSSH per-connection server daemon (10.0.0.1:45938).
Sep 9 00:17:33.339758 systemd-logind[1565]: Removed session 4.
Sep 9 00:17:33.385074 sshd[1733]: Accepted publickey for core from 10.0.0.1 port 45938 ssh2: RSA SHA256:71pIFbyCQBIroIGqc5DeH9snrZBBxVf1ertHDrOSKjM
Sep 9 00:17:33.386764 sshd-session[1733]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 9 00:17:33.391911 systemd-logind[1565]: New session 5 of user core.
Sep 9 00:17:33.401231 systemd[1]: Started session-5.scope - Session 5 of User core.
Sep 9 00:17:33.451592 sshd[1735]: Connection closed by 10.0.0.1 port 45938
Sep 9 00:17:33.452017 sshd-session[1733]: pam_unix(sshd:session): session closed for user core
Sep 9 00:17:33.467352 systemd[1]: sshd@4-10.0.0.54:22-10.0.0.1:45938.service: Deactivated successfully.
Sep 9 00:17:33.469253 systemd[1]: session-5.scope: Deactivated successfully.
Sep 9 00:17:33.469965 systemd-logind[1565]: Session 5 logged out. Waiting for processes to exit.
Sep 9 00:17:33.472976 systemd[1]: Started sshd@5-10.0.0.54:22-10.0.0.1:45948.service - OpenSSH per-connection server daemon (10.0.0.1:45948).
Sep 9 00:17:33.473535 systemd-logind[1565]: Removed session 5.
Sep 9 00:17:33.534259 sshd[1741]: Accepted publickey for core from 10.0.0.1 port 45948 ssh2: RSA SHA256:71pIFbyCQBIroIGqc5DeH9snrZBBxVf1ertHDrOSKjM
Sep 9 00:17:33.535905 sshd-session[1741]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 9 00:17:33.540991 systemd-logind[1565]: New session 6 of user core.
Sep 9 00:17:33.550216 systemd[1]: Started session-6.scope - Session 6 of User core.
Sep 9 00:17:33.607950 sshd[1743]: Connection closed by 10.0.0.1 port 45948
Sep 9 00:17:33.608242 sshd-session[1741]: pam_unix(sshd:session): session closed for user core
Sep 9 00:17:33.618858 systemd[1]: sshd@5-10.0.0.54:22-10.0.0.1:45948.service: Deactivated successfully.
Sep 9 00:17:33.621248 systemd[1]: session-6.scope: Deactivated successfully.
Sep 9 00:17:33.622189 systemd-logind[1565]: Session 6 logged out. Waiting for processes to exit.
Sep 9 00:17:33.625291 systemd[1]: Started sshd@6-10.0.0.54:22-10.0.0.1:45958.service - OpenSSH per-connection server daemon (10.0.0.1:45958).
Sep 9 00:17:33.626188 systemd-logind[1565]: Removed session 6.
Sep 9 00:17:33.676459 sshd[1749]: Accepted publickey for core from 10.0.0.1 port 45958 ssh2: RSA SHA256:71pIFbyCQBIroIGqc5DeH9snrZBBxVf1ertHDrOSKjM
Sep 9 00:17:33.678242 sshd-session[1749]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 9 00:17:33.683616 systemd-logind[1565]: New session 7 of user core.
Sep 9 00:17:33.697249 systemd[1]: Started session-7.scope - Session 7 of User core.
Sep 9 00:17:33.759072 sudo[1753]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
Sep 9 00:17:33.759479 sudo[1753]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Sep 9 00:17:33.777542 sudo[1753]: pam_unix(sudo:session): session closed for user root
Sep 9 00:17:33.779647 sshd[1752]: Connection closed by 10.0.0.1 port 45958
Sep 9 00:17:33.780134 sshd-session[1749]: pam_unix(sshd:session): session closed for user core
Sep 9 00:17:33.801346 systemd[1]: sshd@6-10.0.0.54:22-10.0.0.1:45958.service: Deactivated successfully.
Sep 9 00:17:33.803273 systemd[1]: session-7.scope: Deactivated successfully.
Sep 9 00:17:33.804075 systemd-logind[1565]: Session 7 logged out. Waiting for processes to exit.
Sep 9 00:17:33.807420 systemd[1]: Started sshd@7-10.0.0.54:22-10.0.0.1:45972.service - OpenSSH per-connection server daemon (10.0.0.1:45972).
Sep 9 00:17:33.808006 systemd-logind[1565]: Removed session 7.
Sep 9 00:17:33.867022 sshd[1759]: Accepted publickey for core from 10.0.0.1 port 45972 ssh2: RSA SHA256:71pIFbyCQBIroIGqc5DeH9snrZBBxVf1ertHDrOSKjM
Sep 9 00:17:33.868831 sshd-session[1759]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 9 00:17:33.874019 systemd-logind[1565]: New session 8 of user core.
Sep 9 00:17:33.890301 systemd[1]: Started session-8.scope - Session 8 of User core.
Sep 9 00:17:33.946990 sudo[1763]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
Sep 9 00:17:33.947364 sudo[1763]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Sep 9 00:17:33.972502 sudo[1763]: pam_unix(sudo:session): session closed for user root
Sep 9 00:17:33.979183 sudo[1762]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules
Sep 9 00:17:33.979519 sudo[1762]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Sep 9 00:17:33.989762 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Sep 9 00:17:34.045267 augenrules[1785]: No rules
Sep 9 00:17:34.047141 systemd[1]: audit-rules.service: Deactivated successfully.
Sep 9 00:17:34.047446 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Sep 9 00:17:34.048690 sudo[1762]: pam_unix(sudo:session): session closed for user root
Sep 9 00:17:34.050343 sshd[1761]: Connection closed by 10.0.0.1 port 45972
Sep 9 00:17:34.050755 sshd-session[1759]: pam_unix(sshd:session): session closed for user core
Sep 9 00:17:34.061926 systemd[1]: sshd@7-10.0.0.54:22-10.0.0.1:45972.service: Deactivated successfully.
Sep 9 00:17:34.063981 systemd[1]: session-8.scope: Deactivated successfully.
Sep 9 00:17:34.064753 systemd-logind[1565]: Session 8 logged out. Waiting for processes to exit.
Sep 9 00:17:34.068077 systemd[1]: Started sshd@8-10.0.0.54:22-10.0.0.1:45980.service - OpenSSH per-connection server daemon (10.0.0.1:45980).
Sep 9 00:17:34.068697 systemd-logind[1565]: Removed session 8.
Sep 9 00:17:34.117838 sshd[1794]: Accepted publickey for core from 10.0.0.1 port 45980 ssh2: RSA SHA256:71pIFbyCQBIroIGqc5DeH9snrZBBxVf1ertHDrOSKjM
Sep 9 00:17:34.119741 sshd-session[1794]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 9 00:17:34.124714 systemd-logind[1565]: New session 9 of user core.
Sep 9 00:17:34.139216 systemd[1]: Started session-9.scope - Session 9 of User core.
Sep 9 00:17:34.194755 sudo[1797]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Sep 9 00:17:34.195211 sudo[1797]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Sep 9 00:17:34.844543 systemd[1]: Starting docker.service - Docker Application Container Engine...
Sep 9 00:17:34.862412 (dockerd)[1819]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU
Sep 9 00:17:35.081800 dockerd[1819]: time="2025-09-09T00:17:35.081699024Z" level=info msg="Starting up"
Sep 9 00:17:35.083164 dockerd[1819]: time="2025-09-09T00:17:35.083132182Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider"
Sep 9 00:17:35.209463 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
Sep 9 00:17:35.211156 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 9 00:17:35.284259 dockerd[1819]: time="2025-09-09T00:17:35.284169470Z" level=info msg="Loading containers: start."
Sep 9 00:17:35.376103 kernel: Initializing XFRM netlink socket
Sep 9 00:17:35.513403 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 9 00:17:35.524408 (kubelet)[1906]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Sep 9 00:17:35.583150 kubelet[1906]: E0909 00:17:35.582907 1906 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Sep 9 00:17:35.590109 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Sep 9 00:17:35.590323 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Sep 9 00:17:35.590735 systemd[1]: kubelet.service: Consumed 313ms CPU time, 109.4M memory peak.
Sep 9 00:17:35.923643 systemd-networkd[1439]: docker0: Link UP
Sep 9 00:17:35.929543 dockerd[1819]: time="2025-09-09T00:17:35.929484707Z" level=info msg="Loading containers: done."
Sep 9 00:17:35.944323 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck478933749-merged.mount: Deactivated successfully.
Sep 9 00:17:35.947202 dockerd[1819]: time="2025-09-09T00:17:35.947140074Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Sep 9 00:17:35.947289 dockerd[1819]: time="2025-09-09T00:17:35.947240181Z" level=info msg="Docker daemon" commit=bbd0a17ccc67e48d4a69393287b7fcc4f0578683 containerd-snapshotter=false storage-driver=overlay2 version=28.0.1
Sep 9 00:17:35.947400 dockerd[1819]: time="2025-09-09T00:17:35.947369795Z" level=info msg="Initializing buildkit"
Sep 9 00:17:35.981078 dockerd[1819]: time="2025-09-09T00:17:35.980982011Z" level=info msg="Completed buildkit initialization"
Sep 9 00:17:35.989087 dockerd[1819]: time="2025-09-09T00:17:35.988991024Z" level=info msg="Daemon has completed initialization"
Sep 9 00:17:35.989208 dockerd[1819]: time="2025-09-09T00:17:35.989132800Z" level=info msg="API listen on /run/docker.sock"
Sep 9 00:17:35.989310 systemd[1]: Started docker.service - Docker Application Container Engine.
Sep 9 00:17:36.759182 containerd[1580]: time="2025-09-09T00:17:36.759135124Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.12\""
Sep 9 00:17:38.662007 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2858288605.mount: Deactivated successfully.
Sep 9 00:17:39.600080 containerd[1580]: time="2025-09-09T00:17:39.600018439Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.31.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 9 00:17:39.600737 containerd[1580]: time="2025-09-09T00:17:39.600709085Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.31.12: active requests=0, bytes read=28079631"
Sep 9 00:17:39.601966 containerd[1580]: time="2025-09-09T00:17:39.601914185Z" level=info msg="ImageCreate event name:\"sha256:b1963c5b49c1722b8f408deaf83aafca7f48f47fed0ed14e5c10e93cc55974a7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 9 00:17:39.604528 containerd[1580]: time="2025-09-09T00:17:39.604479397Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:e9011c3bee8c06ecabd7816e119dca4e448c92f7a78acd891de3d2db1dc6c234\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 9 00:17:39.605399 containerd[1580]: time="2025-09-09T00:17:39.605361882Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.31.12\" with image id \"sha256:b1963c5b49c1722b8f408deaf83aafca7f48f47fed0ed14e5c10e93cc55974a7\", repo tag \"registry.k8s.io/kube-apiserver:v1.31.12\", repo digest \"registry.k8s.io/kube-apiserver@sha256:e9011c3bee8c06ecabd7816e119dca4e448c92f7a78acd891de3d2db1dc6c234\", size \"28076431\" in 2.846180672s"
Sep 9 00:17:39.605399 containerd[1580]: time="2025-09-09T00:17:39.605397158Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.12\" returns image reference \"sha256:b1963c5b49c1722b8f408deaf83aafca7f48f47fed0ed14e5c10e93cc55974a7\""
Sep 9 00:17:39.606011 containerd[1580]: time="2025-09-09T00:17:39.605940848Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.12\""
Sep 9 00:17:40.937782 kernel: hrtimer: interrupt took 2104908 ns
Sep 9 00:17:41.680583 containerd[1580]: time="2025-09-09T00:17:41.680495750Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.31.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 9 00:17:41.681652 containerd[1580]: time="2025-09-09T00:17:41.681593659Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.31.12: active requests=0, bytes read=24714681"
Sep 9 00:17:41.683210 containerd[1580]: time="2025-09-09T00:17:41.683103321Z" level=info msg="ImageCreate event name:\"sha256:200c1a99a6f2b9d3b0a6e9b7362663513589341e0e58bc3b953a373efa735dfd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 9 00:17:41.685846 containerd[1580]: time="2025-09-09T00:17:41.685817472Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:d2862f94d87320267fddbd55db26556a267aa802e51d6b60f25786b4c428afc8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 9 00:17:41.687064 containerd[1580]: time="2025-09-09T00:17:41.686999599Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.31.12\" with image id \"sha256:200c1a99a6f2b9d3b0a6e9b7362663513589341e0e58bc3b953a373efa735dfd\", repo tag \"registry.k8s.io/kube-controller-manager:v1.31.12\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:d2862f94d87320267fddbd55db26556a267aa802e51d6b60f25786b4c428afc8\", size \"26317875\" in 2.080978641s"
Sep 9 00:17:41.687129 containerd[1580]: time="2025-09-09T00:17:41.687063079Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.12\" returns image reference \"sha256:200c1a99a6f2b9d3b0a6e9b7362663513589341e0e58bc3b953a373efa735dfd\""
Sep 9 00:17:41.687645 containerd[1580]: time="2025-09-09T00:17:41.687616727Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.12\""
Sep 9 00:17:45.840903 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
Sep 9 00:17:45.842753 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 9 00:17:46.070977 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 9 00:17:46.087683 (kubelet)[2116]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Sep 9 00:17:46.398445 kubelet[2116]: E0909 00:17:46.398346 2116 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Sep 9 00:17:46.403551 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Sep 9 00:17:46.403777 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Sep 9 00:17:46.404185 systemd[1]: kubelet.service: Consumed 236ms CPU time, 109M memory peak.
Sep 9 00:17:46.714842 containerd[1580]: time="2025-09-09T00:17:46.714644093Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.31.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 9 00:17:46.725766 containerd[1580]: time="2025-09-09T00:17:46.725688690Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.31.12: active requests=0, bytes read=18782427"
Sep 9 00:17:46.786872 containerd[1580]: time="2025-09-09T00:17:46.784526453Z" level=info msg="ImageCreate event name:\"sha256:bcdd9599681a9460a5539177a986dbdaf880ac56eeb117ab94adb8f37889efba\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 9 00:17:46.857107 containerd[1580]: time="2025-09-09T00:17:46.857011765Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:152943b7e30244f4415fd0a5860a2dccd91660fe983d30a28a10edb0cc8f6756\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 9 00:17:46.858149 containerd[1580]: time="2025-09-09T00:17:46.858109464Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.31.12\" with image id \"sha256:bcdd9599681a9460a5539177a986dbdaf880ac56eeb117ab94adb8f37889efba\", repo tag \"registry.k8s.io/kube-scheduler:v1.31.12\", repo digest \"registry.k8s.io/kube-scheduler@sha256:152943b7e30244f4415fd0a5860a2dccd91660fe983d30a28a10edb0cc8f6756\", size \"20385639\" in 5.17046234s"
Sep 9 00:17:46.858233 containerd[1580]: time="2025-09-09T00:17:46.858154037Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.12\" returns image reference \"sha256:bcdd9599681a9460a5539177a986dbdaf880ac56eeb117ab94adb8f37889efba\""
Sep 9 00:17:46.858969 containerd[1580]: time="2025-09-09T00:17:46.858646240Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.12\""
Sep 9 00:17:49.596298 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1563718657.mount: Deactivated successfully.
Sep 9 00:17:50.449906 containerd[1580]: time="2025-09-09T00:17:50.449777045Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.31.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 9 00:17:50.478290 containerd[1580]: time="2025-09-09T00:17:50.478188156Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.31.12: active requests=0, bytes read=30384255"
Sep 9 00:17:50.506328 containerd[1580]: time="2025-09-09T00:17:50.506236647Z" level=info msg="ImageCreate event name:\"sha256:507cc52f5f78c0cff25e904c76c18e6bfc90982e9cc2aa4dcb19033f21c3f679\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 9 00:17:50.535087 containerd[1580]: time="2025-09-09T00:17:50.534929747Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:90aa6b5f4065937521ff8438bc705317485d0be3f8b00a07145e697d92cc2cc6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 9 00:17:50.535822 containerd[1580]: time="2025-09-09T00:17:50.535755386Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.31.12\" with image id \"sha256:507cc52f5f78c0cff25e904c76c18e6bfc90982e9cc2aa4dcb19033f21c3f679\", repo tag \"registry.k8s.io/kube-proxy:v1.31.12\", repo digest \"registry.k8s.io/kube-proxy@sha256:90aa6b5f4065937521ff8438bc705317485d0be3f8b00a07145e697d92cc2cc6\", size \"30383274\" in 3.677029406s"
Sep 9 00:17:50.535822 containerd[1580]: time="2025-09-09T00:17:50.535818735Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.12\" returns image reference \"sha256:507cc52f5f78c0cff25e904c76c18e6bfc90982e9cc2aa4dcb19033f21c3f679\""
Sep 9 00:17:50.536507 containerd[1580]: time="2025-09-09T00:17:50.536460980Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\""
Sep 9 00:17:51.262013 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3025275907.mount: Deactivated successfully.
Sep 9 00:17:52.729667 containerd[1580]: time="2025-09-09T00:17:52.729562381Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 9 00:17:52.730656 containerd[1580]: time="2025-09-09T00:17:52.730625535Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=18565241"
Sep 9 00:17:52.732266 containerd[1580]: time="2025-09-09T00:17:52.732226197Z" level=info msg="ImageCreate event name:\"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 9 00:17:52.735506 containerd[1580]: time="2025-09-09T00:17:52.735439374Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 9 00:17:52.738063 containerd[1580]: time="2025-09-09T00:17:52.738018312Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\", repo
tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"18562039\" in 2.201517317s" Sep 9 00:17:52.738120 containerd[1580]: time="2025-09-09T00:17:52.738076090Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\"" Sep 9 00:17:52.738624 containerd[1580]: time="2025-09-09T00:17:52.738552995Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Sep 9 00:17:53.622761 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount397638301.mount: Deactivated successfully. Sep 9 00:17:53.628331 containerd[1580]: time="2025-09-09T00:17:53.628272304Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 9 00:17:53.629112 containerd[1580]: time="2025-09-09T00:17:53.629026198Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138" Sep 9 00:17:53.630351 containerd[1580]: time="2025-09-09T00:17:53.630287444Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 9 00:17:53.632551 containerd[1580]: time="2025-09-09T00:17:53.632469046Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 9 00:17:53.633151 containerd[1580]: time="2025-09-09T00:17:53.633096252Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id 
\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 894.508241ms" Sep 9 00:17:53.633151 containerd[1580]: time="2025-09-09T00:17:53.633144984Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Sep 9 00:17:53.633636 containerd[1580]: time="2025-09-09T00:17:53.633606770Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\"" Sep 9 00:17:55.207600 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1730666741.mount: Deactivated successfully. Sep 9 00:17:56.654486 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Sep 9 00:17:56.657440 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 9 00:17:57.191492 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 9 00:17:57.271072 (kubelet)[2248]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 9 00:17:57.479010 kubelet[2248]: E0909 00:17:57.478829 2248 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 9 00:17:57.484089 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 9 00:17:57.484324 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 9 00:17:57.484866 systemd[1]: kubelet.service: Consumed 551ms CPU time, 110.7M memory peak. 
Sep 9 00:17:59.472311 containerd[1580]: time="2025-09-09T00:17:59.472213937Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.15-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 00:17:59.495781 containerd[1580]: time="2025-09-09T00:17:59.495718311Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.15-0: active requests=0, bytes read=56910709" Sep 9 00:17:59.497353 containerd[1580]: time="2025-09-09T00:17:59.497305514Z" level=info msg="ImageCreate event name:\"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 00:17:59.551624 containerd[1580]: time="2025-09-09T00:17:59.551530752Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 00:17:59.552834 containerd[1580]: time="2025-09-09T00:17:59.552800198Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.15-0\" with image id \"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\", repo tag \"registry.k8s.io/etcd:3.5.15-0\", repo digest \"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\", size \"56909194\" in 5.919164514s" Sep 9 00:17:59.552892 containerd[1580]: time="2025-09-09T00:17:59.552840756Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\" returns image reference \"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\"" Sep 9 00:18:02.442534 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Sep 9 00:18:02.442827 systemd[1]: kubelet.service: Consumed 551ms CPU time, 110.7M memory peak. Sep 9 00:18:02.446527 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 9 00:18:02.478808 systemd[1]: Reload requested from client PID 2288 ('systemctl') (unit session-9.scope)... 
Sep 9 00:18:02.478829 systemd[1]: Reloading... Sep 9 00:18:02.609289 zram_generator::config[2331]: No configuration found. Sep 9 00:18:03.564772 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 9 00:18:03.704655 systemd[1]: Reloading finished in 1225 ms. Sep 9 00:18:03.784036 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Sep 9 00:18:03.784166 systemd[1]: kubelet.service: Failed with result 'signal'. Sep 9 00:18:03.784484 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Sep 9 00:18:03.784535 systemd[1]: kubelet.service: Consumed 188ms CPU time, 98.2M memory peak. Sep 9 00:18:03.786476 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 9 00:18:03.994146 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 9 00:18:04.013608 (kubelet)[2379]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Sep 9 00:18:04.052816 kubelet[2379]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 9 00:18:04.052816 kubelet[2379]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Sep 9 00:18:04.052816 kubelet[2379]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Sep 9 00:18:04.052816 kubelet[2379]: I0909 00:18:04.052237 2379 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Sep 9 00:18:04.585781 kubelet[2379]: I0909 00:18:04.585715 2379 server.go:491] "Kubelet version" kubeletVersion="v1.31.8" Sep 9 00:18:04.585781 kubelet[2379]: I0909 00:18:04.585756 2379 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Sep 9 00:18:04.586021 kubelet[2379]: I0909 00:18:04.586000 2379 server.go:934] "Client rotation is on, will bootstrap in background" Sep 9 00:18:04.604626 kubelet[2379]: E0909 00:18:04.604551 2379 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.54:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.54:6443: connect: connection refused" logger="UnhandledError" Sep 9 00:18:04.607157 kubelet[2379]: I0909 00:18:04.607106 2379 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Sep 9 00:18:04.618882 kubelet[2379]: I0909 00:18:04.618835 2379 server.go:1431] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Sep 9 00:18:04.625651 kubelet[2379]: I0909 00:18:04.625591 2379 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Sep 9 00:18:04.626518 kubelet[2379]: I0909 00:18:04.626483 2379 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Sep 9 00:18:04.626800 kubelet[2379]: I0909 00:18:04.626754 2379 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Sep 9 00:18:04.627026 kubelet[2379]: I0909 00:18:04.626795 2379 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOpti
ons":null,"CgroupVersion":2} Sep 9 00:18:04.627165 kubelet[2379]: I0909 00:18:04.627069 2379 topology_manager.go:138] "Creating topology manager with none policy" Sep 9 00:18:04.627165 kubelet[2379]: I0909 00:18:04.627086 2379 container_manager_linux.go:300] "Creating device plugin manager" Sep 9 00:18:04.627284 kubelet[2379]: I0909 00:18:04.627268 2379 state_mem.go:36] "Initialized new in-memory state store" Sep 9 00:18:04.629505 kubelet[2379]: I0909 00:18:04.629458 2379 kubelet.go:408] "Attempting to sync node with API server" Sep 9 00:18:04.629505 kubelet[2379]: I0909 00:18:04.629498 2379 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Sep 9 00:18:04.629626 kubelet[2379]: I0909 00:18:04.629559 2379 kubelet.go:314] "Adding apiserver pod source" Sep 9 00:18:04.629626 kubelet[2379]: I0909 00:18:04.629603 2379 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Sep 9 00:18:04.634309 kubelet[2379]: I0909 00:18:04.632840 2379 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v2.0.4" apiVersion="v1" Sep 9 00:18:04.634309 kubelet[2379]: I0909 00:18:04.633340 2379 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Sep 9 00:18:04.634309 kubelet[2379]: W0909 00:18:04.633419 2379 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
Sep 9 00:18:04.634503 kubelet[2379]: W0909 00:18:04.634366 2379 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.54:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.54:6443: connect: connection refused Sep 9 00:18:04.634503 kubelet[2379]: E0909 00:18:04.634451 2379 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.54:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.54:6443: connect: connection refused" logger="UnhandledError" Sep 9 00:18:04.634849 kubelet[2379]: W0909 00:18:04.634792 2379 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.54:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.54:6443: connect: connection refused Sep 9 00:18:04.634849 kubelet[2379]: E0909 00:18:04.634835 2379 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.54:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.54:6443: connect: connection refused" logger="UnhandledError" Sep 9 00:18:04.635373 kubelet[2379]: I0909 00:18:04.635351 2379 server.go:1274] "Started kubelet" Sep 9 00:18:04.635496 kubelet[2379]: I0909 00:18:04.635463 2379 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Sep 9 00:18:04.636647 kubelet[2379]: I0909 00:18:04.636611 2379 server.go:449] "Adding debug handlers to kubelet server" Sep 9 00:18:04.641874 kubelet[2379]: I0909 00:18:04.641834 2379 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Sep 9 00:18:04.642464 kubelet[2379]: I0909 00:18:04.642435 2379 
server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Sep 9 00:18:04.642687 kubelet[2379]: I0909 00:18:04.642643 2379 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Sep 9 00:18:04.645350 kubelet[2379]: I0909 00:18:04.645327 2379 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Sep 9 00:18:04.648270 kubelet[2379]: I0909 00:18:04.648236 2379 volume_manager.go:289] "Starting Kubelet Volume Manager" Sep 9 00:18:04.648581 kubelet[2379]: E0909 00:18:04.648555 2379 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 9 00:18:04.648745 kubelet[2379]: I0909 00:18:04.648728 2379 desired_state_of_world_populator.go:147] "Desired state populator starts to run" Sep 9 00:18:04.648894 kubelet[2379]: I0909 00:18:04.648877 2379 reconciler.go:26] "Reconciler: start to sync state" Sep 9 00:18:04.650057 kubelet[2379]: W0909 00:18:04.649994 2379 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.54:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.54:6443: connect: connection refused Sep 9 00:18:04.650171 kubelet[2379]: E0909 00:18:04.650143 2379 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.54:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.54:6443: connect: connection refused" logger="UnhandledError" Sep 9 00:18:04.650274 kubelet[2379]: E0909 00:18:04.650254 2379 kubelet.go:1478] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Sep 9 00:18:04.650446 kubelet[2379]: E0909 00:18:04.650392 2379 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.54:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.54:6443: connect: connection refused" interval="200ms" Sep 9 00:18:04.652780 kubelet[2379]: E0909 00:18:04.651412 2379 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.54:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.54:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.18637525c2198f7e default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-09-09 00:18:04.63532019 +0000 UTC m=+0.616933712,LastTimestamp:2025-09-09 00:18:04.63532019 +0000 UTC m=+0.616933712,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Sep 9 00:18:04.652780 kubelet[2379]: I0909 00:18:04.652771 2379 factory.go:221] Registration of the containerd container factory successfully Sep 9 00:18:04.652909 kubelet[2379]: I0909 00:18:04.652789 2379 factory.go:221] Registration of the systemd container factory successfully Sep 9 00:18:04.652909 kubelet[2379]: I0909 00:18:04.652869 2379 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Sep 9 00:18:04.667425 kubelet[2379]: I0909 00:18:04.667140 2379 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv4" Sep 9 00:18:04.668880 kubelet[2379]: I0909 00:18:04.668853 2379 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Sep 9 00:18:04.669251 kubelet[2379]: I0909 00:18:04.668889 2379 status_manager.go:217] "Starting to sync pod status with apiserver" Sep 9 00:18:04.669251 kubelet[2379]: I0909 00:18:04.668916 2379 kubelet.go:2321] "Starting kubelet main sync loop" Sep 9 00:18:04.669251 kubelet[2379]: E0909 00:18:04.668963 2379 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Sep 9 00:18:04.669725 kubelet[2379]: I0909 00:18:04.669687 2379 cpu_manager.go:214] "Starting CPU manager" policy="none" Sep 9 00:18:04.669725 kubelet[2379]: I0909 00:18:04.669707 2379 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Sep 9 00:18:04.669725 kubelet[2379]: I0909 00:18:04.669728 2379 state_mem.go:36] "Initialized new in-memory state store" Sep 9 00:18:04.748714 kubelet[2379]: E0909 00:18:04.748633 2379 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 9 00:18:04.748932 kubelet[2379]: W0909 00:18:04.748760 2379 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.54:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.54:6443: connect: connection refused Sep 9 00:18:04.748932 kubelet[2379]: E0909 00:18:04.748859 2379 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.54:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.54:6443: connect: connection refused" logger="UnhandledError" Sep 9 00:18:04.769206 kubelet[2379]: E0909 00:18:04.769117 2379 kubelet.go:2345] "Skipping pod synchronization" err="container runtime status 
check may not have completed yet" Sep 9 00:18:04.813187 kubelet[2379]: I0909 00:18:04.813129 2379 policy_none.go:49] "None policy: Start" Sep 9 00:18:04.814537 kubelet[2379]: I0909 00:18:04.814479 2379 memory_manager.go:170] "Starting memorymanager" policy="None" Sep 9 00:18:04.814537 kubelet[2379]: I0909 00:18:04.814539 2379 state_mem.go:35] "Initializing new in-memory state store" Sep 9 00:18:04.842288 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Sep 9 00:18:04.849326 kubelet[2379]: E0909 00:18:04.849282 2379 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 9 00:18:04.851922 kubelet[2379]: E0909 00:18:04.851857 2379 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.54:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.54:6443: connect: connection refused" interval="400ms" Sep 9 00:18:04.862908 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Sep 9 00:18:04.867054 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. 
Sep 9 00:18:04.888844 kubelet[2379]: I0909 00:18:04.888739 2379 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Sep 9 00:18:04.893201 kubelet[2379]: I0909 00:18:04.893152 2379 eviction_manager.go:189] "Eviction manager: starting control loop" Sep 9 00:18:04.893368 kubelet[2379]: I0909 00:18:04.893192 2379 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Sep 9 00:18:04.893783 kubelet[2379]: I0909 00:18:04.893555 2379 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Sep 9 00:18:04.894806 kubelet[2379]: E0909 00:18:04.894781 2379 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Sep 9 00:18:04.981236 systemd[1]: Created slice kubepods-burstable-podb9711c8a15bce595a207576b2de799b9.slice - libcontainer container kubepods-burstable-podb9711c8a15bce595a207576b2de799b9.slice. Sep 9 00:18:04.995421 kubelet[2379]: I0909 00:18:04.995352 2379 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Sep 9 00:18:04.996034 kubelet[2379]: E0909 00:18:04.996008 2379 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.54:6443/api/v1/nodes\": dial tcp 10.0.0.54:6443: connect: connection refused" node="localhost" Sep 9 00:18:05.004461 systemd[1]: Created slice kubepods-burstable-podfec3f691a145cb26ff55e4af388500b7.slice - libcontainer container kubepods-burstable-podfec3f691a145cb26ff55e4af388500b7.slice. Sep 9 00:18:05.016372 systemd[1]: Created slice kubepods-burstable-pod5dc878868de11c6196259ae42039f4ff.slice - libcontainer container kubepods-burstable-pod5dc878868de11c6196259ae42039f4ff.slice. 
Sep 9 00:18:05.051273 kubelet[2379]: I0909 00:18:05.051207 2379 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/b9711c8a15bce595a207576b2de799b9-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"b9711c8a15bce595a207576b2de799b9\") " pod="kube-system/kube-apiserver-localhost" Sep 9 00:18:05.051273 kubelet[2379]: I0909 00:18:05.051276 2379 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/b9711c8a15bce595a207576b2de799b9-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"b9711c8a15bce595a207576b2de799b9\") " pod="kube-system/kube-apiserver-localhost" Sep 9 00:18:05.051499 kubelet[2379]: I0909 00:18:05.051306 2379 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/fec3f691a145cb26ff55e4af388500b7-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"fec3f691a145cb26ff55e4af388500b7\") " pod="kube-system/kube-controller-manager-localhost" Sep 9 00:18:05.051499 kubelet[2379]: I0909 00:18:05.051333 2379 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/fec3f691a145cb26ff55e4af388500b7-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"fec3f691a145cb26ff55e4af388500b7\") " pod="kube-system/kube-controller-manager-localhost" Sep 9 00:18:05.051499 kubelet[2379]: I0909 00:18:05.051361 2379 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/fec3f691a145cb26ff55e4af388500b7-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"fec3f691a145cb26ff55e4af388500b7\") " pod="kube-system/kube-controller-manager-localhost" Sep 9 00:18:05.051499 
kubelet[2379]: I0909 00:18:05.051384 2379 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/5dc878868de11c6196259ae42039f4ff-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"5dc878868de11c6196259ae42039f4ff\") " pod="kube-system/kube-scheduler-localhost"
Sep 9 00:18:05.051499 kubelet[2379]: I0909 00:18:05.051414 2379 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/b9711c8a15bce595a207576b2de799b9-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"b9711c8a15bce595a207576b2de799b9\") " pod="kube-system/kube-apiserver-localhost"
Sep 9 00:18:05.051636 kubelet[2379]: I0909 00:18:05.051440 2379 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/fec3f691a145cb26ff55e4af388500b7-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"fec3f691a145cb26ff55e4af388500b7\") " pod="kube-system/kube-controller-manager-localhost"
Sep 9 00:18:05.051636 kubelet[2379]: I0909 00:18:05.051464 2379 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/fec3f691a145cb26ff55e4af388500b7-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"fec3f691a145cb26ff55e4af388500b7\") " pod="kube-system/kube-controller-manager-localhost"
Sep 9 00:18:05.198141 kubelet[2379]: I0909 00:18:05.198084 2379 kubelet_node_status.go:72] "Attempting to register node" node="localhost"
Sep 9 00:18:05.198645 kubelet[2379]: E0909 00:18:05.198554 2379 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.54:6443/api/v1/nodes\": dial tcp 10.0.0.54:6443: connect: connection refused" node="localhost"
Sep 9 00:18:05.253560 kubelet[2379]: E0909 00:18:05.253491 2379 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.54:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.54:6443: connect: connection refused" interval="800ms"
Sep 9 00:18:05.303852 kubelet[2379]: E0909 00:18:05.303794 2379 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 9 00:18:05.304576 containerd[1580]: time="2025-09-09T00:18:05.304533970Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:b9711c8a15bce595a207576b2de799b9,Namespace:kube-system,Attempt:0,}"
Sep 9 00:18:05.314770 kubelet[2379]: E0909 00:18:05.314750 2379 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 9 00:18:05.315155 containerd[1580]: time="2025-09-09T00:18:05.315115904Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:fec3f691a145cb26ff55e4af388500b7,Namespace:kube-system,Attempt:0,}"
Sep 9 00:18:05.322403 kubelet[2379]: E0909 00:18:05.322377 2379 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 9 00:18:05.322684 containerd[1580]: time="2025-09-09T00:18:05.322648645Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:5dc878868de11c6196259ae42039f4ff,Namespace:kube-system,Attempt:0,}"
Sep 9 00:18:05.600825 kubelet[2379]: I0909 00:18:05.600784 2379 kubelet_node_status.go:72] "Attempting to register node" node="localhost"
Sep 9 00:18:05.601193 kubelet[2379]: E0909 00:18:05.601165 2379 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.54:6443/api/v1/nodes\": dial tcp 10.0.0.54:6443: connect: connection refused" node="localhost"
Sep 9 00:18:05.804923 kubelet[2379]: W0909 00:18:05.804828 2379 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.54:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.54:6443: connect: connection refused
Sep 9 00:18:05.804923 kubelet[2379]: E0909 00:18:05.804913 2379 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.54:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.54:6443: connect: connection refused" logger="UnhandledError"
Sep 9 00:18:05.880848 kubelet[2379]: W0909 00:18:05.880633 2379 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.54:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.54:6443: connect: connection refused
Sep 9 00:18:05.880848 kubelet[2379]: E0909 00:18:05.880722 2379 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.54:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.54:6443: connect: connection refused" logger="UnhandledError"
Sep 9 00:18:05.959386 kubelet[2379]: W0909 00:18:05.959294 2379 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.54:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.54:6443: connect: connection refused
Sep 9 00:18:05.959386 kubelet[2379]: E0909 00:18:05.959383 2379 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.54:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.54:6443: connect: connection refused" logger="UnhandledError"
Sep 9 00:18:06.030089 containerd[1580]: time="2025-09-09T00:18:06.029404808Z" level=info msg="connecting to shim d46b58f7bcf82cfb300e9a31d60ca5c872b44dd90d2e6dd2aa7dbaf633100cef" address="unix:///run/containerd/s/b77e862a14e4034a5c7962ec94764f88a6d3ce5c613467296d04c1d219b42478" namespace=k8s.io protocol=ttrpc version=3
Sep 9 00:18:06.034325 containerd[1580]: time="2025-09-09T00:18:06.034266928Z" level=info msg="connecting to shim 27871ccc06da4fce3306eca1099f92c409fe12b78cc8ebe6ee053cda9d18857c" address="unix:///run/containerd/s/4c1fc7e5fafbdb8e35c760c333550a442e7b69ad36af2abdf5dbce85904faa3a" namespace=k8s.io protocol=ttrpc version=3
Sep 9 00:18:06.040596 containerd[1580]: time="2025-09-09T00:18:06.040505891Z" level=info msg="connecting to shim 3b6d55a85da8b156067c4950b9ef2935737671dc0a73230b92bdee4967d1d209" address="unix:///run/containerd/s/ad0feae18b09739527705ab6c08b9fcdb02d62437b840b0508e2fcf331f227aa" namespace=k8s.io protocol=ttrpc version=3
Sep 9 00:18:06.108537 kubelet[2379]: E0909 00:18:06.108396 2379 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.54:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.54:6443: connect: connection refused" interval="1.6s"
Sep 9 00:18:06.136242 systemd[1]: Started cri-containerd-d46b58f7bcf82cfb300e9a31d60ca5c872b44dd90d2e6dd2aa7dbaf633100cef.scope - libcontainer container d46b58f7bcf82cfb300e9a31d60ca5c872b44dd90d2e6dd2aa7dbaf633100cef.
Sep 9 00:18:06.146187 systemd[1]: Started cri-containerd-3b6d55a85da8b156067c4950b9ef2935737671dc0a73230b92bdee4967d1d209.scope - libcontainer container 3b6d55a85da8b156067c4950b9ef2935737671dc0a73230b92bdee4967d1d209.
Sep 9 00:18:06.160337 systemd[1]: Started cri-containerd-27871ccc06da4fce3306eca1099f92c409fe12b78cc8ebe6ee053cda9d18857c.scope - libcontainer container 27871ccc06da4fce3306eca1099f92c409fe12b78cc8ebe6ee053cda9d18857c.
Sep 9 00:18:06.207072 kubelet[2379]: W0909 00:18:06.206966 2379 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.54:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.54:6443: connect: connection refused
Sep 9 00:18:06.207072 kubelet[2379]: E0909 00:18:06.207072 2379 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.54:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.54:6443: connect: connection refused" logger="UnhandledError"
Sep 9 00:18:06.292297 containerd[1580]: time="2025-09-09T00:18:06.292224916Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:5dc878868de11c6196259ae42039f4ff,Namespace:kube-system,Attempt:0,} returns sandbox id \"3b6d55a85da8b156067c4950b9ef2935737671dc0a73230b92bdee4967d1d209\""
Sep 9 00:18:06.294023 kubelet[2379]: E0909 00:18:06.293965 2379 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 9 00:18:06.296015 containerd[1580]: time="2025-09-09T00:18:06.295957922Z" level=info msg="CreateContainer within sandbox \"3b6d55a85da8b156067c4950b9ef2935737671dc0a73230b92bdee4967d1d209\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}"
Sep 9 00:18:06.360907 containerd[1580]: time="2025-09-09T00:18:06.360845200Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:b9711c8a15bce595a207576b2de799b9,Namespace:kube-system,Attempt:0,} returns sandbox id \"d46b58f7bcf82cfb300e9a31d60ca5c872b44dd90d2e6dd2aa7dbaf633100cef\""
Sep 9 00:18:06.361913 kubelet[2379]: E0909 00:18:06.361882 2379 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 9 00:18:06.371269 containerd[1580]: time="2025-09-09T00:18:06.371198284Z" level=info msg="CreateContainer within sandbox \"d46b58f7bcf82cfb300e9a31d60ca5c872b44dd90d2e6dd2aa7dbaf633100cef\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}"
Sep 9 00:18:06.403159 kubelet[2379]: I0909 00:18:06.402994 2379 kubelet_node_status.go:72] "Attempting to register node" node="localhost"
Sep 9 00:18:06.403669 kubelet[2379]: E0909 00:18:06.403484 2379 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.54:6443/api/v1/nodes\": dial tcp 10.0.0.54:6443: connect: connection refused" node="localhost"
Sep 9 00:18:06.412408 containerd[1580]: time="2025-09-09T00:18:06.412337077Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:fec3f691a145cb26ff55e4af388500b7,Namespace:kube-system,Attempt:0,} returns sandbox id \"27871ccc06da4fce3306eca1099f92c409fe12b78cc8ebe6ee053cda9d18857c\""
Sep 9 00:18:06.413577 kubelet[2379]: E0909 00:18:06.413506 2379 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 9 00:18:06.415605 containerd[1580]: time="2025-09-09T00:18:06.415560547Z" level=info msg="CreateContainer within sandbox \"27871ccc06da4fce3306eca1099f92c409fe12b78cc8ebe6ee053cda9d18857c\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}"
Sep 9 00:18:06.426161 containerd[1580]: time="2025-09-09T00:18:06.426099213Z" level=info msg="Container 0d8f78f47c2ff2e677ca856ba5354aea26690cd7532d4055a43c8615465de8c8: CDI devices from CRI Config.CDIDevices: []"
Sep 9 00:18:06.430630 containerd[1580]: time="2025-09-09T00:18:06.430552697Z" level=info msg="Container 4a773a697341e81ab2c56b905150e1686c9df922ba989b8c522bb0a222bc3307: CDI devices from CRI Config.CDIDevices: []"
Sep 9 00:18:06.434262 containerd[1580]: time="2025-09-09T00:18:06.434197135Z" level=info msg="Container 0cdfc1df4baf25432cc8dafe87580e1e0722cffff4e57d2275330c3a4a530ebb: CDI devices from CRI Config.CDIDevices: []"
Sep 9 00:18:06.442087 containerd[1580]: time="2025-09-09T00:18:06.442007331Z" level=info msg="CreateContainer within sandbox \"d46b58f7bcf82cfb300e9a31d60ca5c872b44dd90d2e6dd2aa7dbaf633100cef\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"4a773a697341e81ab2c56b905150e1686c9df922ba989b8c522bb0a222bc3307\""
Sep 9 00:18:06.442954 containerd[1580]: time="2025-09-09T00:18:06.442906378Z" level=info msg="StartContainer for \"4a773a697341e81ab2c56b905150e1686c9df922ba989b8c522bb0a222bc3307\""
Sep 9 00:18:06.444481 containerd[1580]: time="2025-09-09T00:18:06.444435350Z" level=info msg="CreateContainer within sandbox \"3b6d55a85da8b156067c4950b9ef2935737671dc0a73230b92bdee4967d1d209\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"0d8f78f47c2ff2e677ca856ba5354aea26690cd7532d4055a43c8615465de8c8\""
Sep 9 00:18:06.444689 containerd[1580]: time="2025-09-09T00:18:06.444637675Z" level=info msg="connecting to shim 4a773a697341e81ab2c56b905150e1686c9df922ba989b8c522bb0a222bc3307" address="unix:///run/containerd/s/b77e862a14e4034a5c7962ec94764f88a6d3ce5c613467296d04c1d219b42478" protocol=ttrpc version=3
Sep 9 00:18:06.445212 containerd[1580]: time="2025-09-09T00:18:06.445184062Z" level=info msg="StartContainer for \"0d8f78f47c2ff2e677ca856ba5354aea26690cd7532d4055a43c8615465de8c8\""
Sep 9 00:18:06.447090 containerd[1580]: time="2025-09-09T00:18:06.447036238Z" level=info msg="connecting to shim 0d8f78f47c2ff2e677ca856ba5354aea26690cd7532d4055a43c8615465de8c8" address="unix:///run/containerd/s/ad0feae18b09739527705ab6c08b9fcdb02d62437b840b0508e2fcf331f227aa" protocol=ttrpc version=3
Sep 9 00:18:06.450837 containerd[1580]: time="2025-09-09T00:18:06.450793771Z" level=info msg="CreateContainer within sandbox \"27871ccc06da4fce3306eca1099f92c409fe12b78cc8ebe6ee053cda9d18857c\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"0cdfc1df4baf25432cc8dafe87580e1e0722cffff4e57d2275330c3a4a530ebb\""
Sep 9 00:18:06.451608 containerd[1580]: time="2025-09-09T00:18:06.451534006Z" level=info msg="StartContainer for \"0cdfc1df4baf25432cc8dafe87580e1e0722cffff4e57d2275330c3a4a530ebb\""
Sep 9 00:18:06.452906 containerd[1580]: time="2025-09-09T00:18:06.452868399Z" level=info msg="connecting to shim 0cdfc1df4baf25432cc8dafe87580e1e0722cffff4e57d2275330c3a4a530ebb" address="unix:///run/containerd/s/4c1fc7e5fafbdb8e35c760c333550a442e7b69ad36af2abdf5dbce85904faa3a" protocol=ttrpc version=3
Sep 9 00:18:06.470392 systemd[1]: Started cri-containerd-4a773a697341e81ab2c56b905150e1686c9df922ba989b8c522bb0a222bc3307.scope - libcontainer container 4a773a697341e81ab2c56b905150e1686c9df922ba989b8c522bb0a222bc3307.
Sep 9 00:18:06.476249 systemd[1]: Started cri-containerd-0d8f78f47c2ff2e677ca856ba5354aea26690cd7532d4055a43c8615465de8c8.scope - libcontainer container 0d8f78f47c2ff2e677ca856ba5354aea26690cd7532d4055a43c8615465de8c8.
Sep 9 00:18:06.491408 systemd[1]: Started cri-containerd-0cdfc1df4baf25432cc8dafe87580e1e0722cffff4e57d2275330c3a4a530ebb.scope - libcontainer container 0cdfc1df4baf25432cc8dafe87580e1e0722cffff4e57d2275330c3a4a530ebb.
Sep 9 00:18:06.597424 containerd[1580]: time="2025-09-09T00:18:06.597155543Z" level=info msg="StartContainer for \"0d8f78f47c2ff2e677ca856ba5354aea26690cd7532d4055a43c8615465de8c8\" returns successfully"
Sep 9 00:18:06.599691 containerd[1580]: time="2025-09-09T00:18:06.599642674Z" level=info msg="StartContainer for \"0cdfc1df4baf25432cc8dafe87580e1e0722cffff4e57d2275330c3a4a530ebb\" returns successfully"
Sep 9 00:18:06.639706 containerd[1580]: time="2025-09-09T00:18:06.639663728Z" level=info msg="StartContainer for \"4a773a697341e81ab2c56b905150e1686c9df922ba989b8c522bb0a222bc3307\" returns successfully"
Sep 9 00:18:06.645118 kubelet[2379]: E0909 00:18:06.645013 2379 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.54:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.54:6443: connect: connection refused" logger="UnhandledError"
Sep 9 00:18:06.687138 kubelet[2379]: E0909 00:18:06.686741 2379 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 9 00:18:06.691601 kubelet[2379]: E0909 00:18:06.691569 2379 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 9 00:18:06.696095 kubelet[2379]: E0909 00:18:06.696001 2379 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 9 00:18:07.453302 update_engine[1569]: I20250909 00:18:07.453153 1569 update_attempter.cc:509] Updating boot flags...
Sep 9 00:18:07.705247 kubelet[2379]: E0909 00:18:07.704427 2379 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 9 00:18:07.709418 kubelet[2379]: E0909 00:18:07.709344 2379 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 9 00:18:08.006205 kubelet[2379]: I0909 00:18:08.006032 2379 kubelet_node_status.go:72] "Attempting to register node" node="localhost"
Sep 9 00:18:08.492333 kubelet[2379]: E0909 00:18:08.492241 2379 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost"
Sep 9 00:18:08.595431 kubelet[2379]: I0909 00:18:08.595081 2379 kubelet_node_status.go:75] "Successfully registered node" node="localhost"
Sep 9 00:18:08.595431 kubelet[2379]: E0909 00:18:08.595138 2379 kubelet_node_status.go:535] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not found"
Sep 9 00:18:08.622294 kubelet[2379]: E0909 00:18:08.622218 2379 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found"
Sep 9 00:18:09.632777 kubelet[2379]: I0909 00:18:09.632690 2379 apiserver.go:52] "Watching apiserver"
Sep 9 00:18:09.649242 kubelet[2379]: I0909 00:18:09.649168 2379 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world"
Sep 9 00:18:13.672059 systemd[1]: Reload requested from client PID 2668 ('systemctl') (unit session-9.scope)...
Sep 9 00:18:13.672082 systemd[1]: Reloading...
Sep 9 00:18:13.895153 zram_generator::config[2714]: No configuration found.
Sep 9 00:18:13.995005 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Sep 9 00:18:14.136303 systemd[1]: Reloading finished in 463 ms.
Sep 9 00:18:14.168945 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 9 00:18:14.187057 systemd[1]: kubelet.service: Deactivated successfully.
Sep 9 00:18:14.187463 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 9 00:18:14.187537 systemd[1]: kubelet.service: Consumed 1.089s CPU time, 133.3M memory peak.
Sep 9 00:18:14.190424 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 9 00:18:14.462923 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 9 00:18:14.475784 (kubelet)[2756]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Sep 9 00:18:14.541148 kubelet[2756]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Sep 9 00:18:14.541148 kubelet[2756]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Sep 9 00:18:14.541148 kubelet[2756]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Sep 9 00:18:14.541720 kubelet[2756]: I0909 00:18:14.541254 2756 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Sep 9 00:18:14.554337 kubelet[2756]: I0909 00:18:14.554257 2756 server.go:491] "Kubelet version" kubeletVersion="v1.31.8"
Sep 9 00:18:14.554337 kubelet[2756]: I0909 00:18:14.554329 2756 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Sep 9 00:18:14.554810 kubelet[2756]: I0909 00:18:14.554775 2756 server.go:934] "Client rotation is on, will bootstrap in background"
Sep 9 00:18:14.556264 kubelet[2756]: I0909 00:18:14.556238 2756 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
Sep 9 00:18:14.559005 kubelet[2756]: I0909 00:18:14.558964 2756 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Sep 9 00:18:14.567930 kubelet[2756]: I0909 00:18:14.566196 2756 server.go:1431] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd"
Sep 9 00:18:14.575834 kubelet[2756]: I0909 00:18:14.575779 2756 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Sep 9 00:18:14.576025 kubelet[2756]: I0909 00:18:14.575959 2756 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority"
Sep 9 00:18:14.577034 kubelet[2756]: I0909 00:18:14.576198 2756 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Sep 9 00:18:14.577034 kubelet[2756]: I0909 00:18:14.576245 2756 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Sep 9 00:18:14.577034 kubelet[2756]: I0909 00:18:14.576479 2756 topology_manager.go:138] "Creating topology manager with none policy"
Sep 9 00:18:14.577034 kubelet[2756]: I0909 00:18:14.576497 2756 container_manager_linux.go:300] "Creating device plugin manager"
Sep 9 00:18:14.577271 kubelet[2756]: I0909 00:18:14.576534 2756 state_mem.go:36] "Initialized new in-memory state store"
Sep 9 00:18:14.577271 kubelet[2756]: I0909 00:18:14.576756 2756 kubelet.go:408] "Attempting to sync node with API server"
Sep 9 00:18:14.577271 kubelet[2756]: I0909 00:18:14.576773 2756 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests"
Sep 9 00:18:14.577271 kubelet[2756]: I0909 00:18:14.576814 2756 kubelet.go:314] "Adding apiserver pod source"
Sep 9 00:18:14.577271 kubelet[2756]: I0909 00:18:14.576828 2756 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Sep 9 00:18:14.579135 kubelet[2756]: I0909 00:18:14.579111 2756 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v2.0.4" apiVersion="v1"
Sep 9 00:18:14.580116 kubelet[2756]: I0909 00:18:14.580011 2756 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Sep 9 00:18:14.583345 kubelet[2756]: I0909 00:18:14.582432 2756 server.go:1274] "Started kubelet"
Sep 9 00:18:14.583701 kubelet[2756]: I0909 00:18:14.583593 2756 server.go:163] "Starting to listen" address="0.0.0.0" port=10250
Sep 9 00:18:14.584924 kubelet[2756]: I0909 00:18:14.584903 2756 server.go:449] "Adding debug handlers to kubelet server"
Sep 9 00:18:14.586994 kubelet[2756]: I0909 00:18:14.585033 2756 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Sep 9 00:18:14.588303 kubelet[2756]: I0909 00:18:14.588282 2756 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Sep 9 00:18:14.589171 kubelet[2756]: I0909 00:18:14.589156 2756 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Sep 9 00:18:14.593620 kubelet[2756]: I0909 00:18:14.593570 2756 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Sep 9 00:18:14.595909 kubelet[2756]: I0909 00:18:14.595633 2756 volume_manager.go:289] "Starting Kubelet Volume Manager"
Sep 9 00:18:14.595909 kubelet[2756]: I0909 00:18:14.595781 2756 desired_state_of_world_populator.go:147] "Desired state populator starts to run"
Sep 9 00:18:14.596099 kubelet[2756]: I0909 00:18:14.595968 2756 reconciler.go:26] "Reconciler: start to sync state"
Sep 9 00:18:14.600867 kubelet[2756]: E0909 00:18:14.600268 2756 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Sep 9 00:18:14.601032 kubelet[2756]: I0909 00:18:14.600878 2756 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Sep 9 00:18:14.603763 kubelet[2756]: I0909 00:18:14.603683 2756 factory.go:221] Registration of the containerd container factory successfully
Sep 9 00:18:14.603763 kubelet[2756]: I0909 00:18:14.603711 2756 factory.go:221] Registration of the systemd container factory successfully
Sep 9 00:18:14.611502 kubelet[2756]: I0909 00:18:14.611440 2756 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Sep 9 00:18:14.616495 sudo[2778]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin
Sep 9 00:18:14.616962 sudo[2778]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0)
Sep 9 00:18:14.618554 kubelet[2756]: I0909 00:18:14.618082 2756 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Sep 9 00:18:14.618554 kubelet[2756]: I0909 00:18:14.618126 2756 status_manager.go:217] "Starting to sync pod status with apiserver"
Sep 9 00:18:14.618554 kubelet[2756]: I0909 00:18:14.618152 2756 kubelet.go:2321] "Starting kubelet main sync loop"
Sep 9 00:18:14.618554 kubelet[2756]: E0909 00:18:14.618209 2756 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Sep 9 00:18:14.664968 kubelet[2756]: I0909 00:18:14.664930 2756 cpu_manager.go:214] "Starting CPU manager" policy="none"
Sep 9 00:18:14.664968 kubelet[2756]: I0909 00:18:14.664955 2756 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
Sep 9 00:18:14.664968 kubelet[2756]: I0909 00:18:14.664978 2756 state_mem.go:36] "Initialized new in-memory state store"
Sep 9 00:18:14.665231 kubelet[2756]: I0909 00:18:14.665170 2756 state_mem.go:88] "Updated default CPUSet" cpuSet=""
Sep 9 00:18:14.665231 kubelet[2756]: I0909 00:18:14.665182 2756 state_mem.go:96] "Updated CPUSet assignments" assignments={}
Sep 9 00:18:14.665231 kubelet[2756]: I0909 00:18:14.665204 2756 policy_none.go:49] "None policy: Start"
Sep 9 00:18:14.665977 kubelet[2756]: I0909 00:18:14.665953 2756 memory_manager.go:170] "Starting memorymanager" policy="None"
Sep 9 00:18:14.666025 kubelet[2756]: I0909 00:18:14.665989 2756 state_mem.go:35] "Initializing new in-memory state store"
Sep 9 00:18:14.666188 kubelet[2756]: I0909 00:18:14.666170 2756 state_mem.go:75] "Updated machine memory state"
Sep 9 00:18:14.676015 kubelet[2756]: I0909 00:18:14.675976 2756 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Sep 9 00:18:14.677347 kubelet[2756]: I0909 00:18:14.677327 2756 eviction_manager.go:189] "Eviction manager: starting control loop"
Sep 9 00:18:14.677499 kubelet[2756]: I0909 00:18:14.677449 2756 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Sep 9 00:18:14.678474 kubelet[2756]: I0909 00:18:14.678268 2756 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Sep 9 00:18:14.792111 kubelet[2756]: I0909 00:18:14.791967 2756 kubelet_node_status.go:72] "Attempting to register node" node="localhost"
Sep 9 00:18:14.805554 kubelet[2756]: I0909 00:18:14.805473 2756 kubelet_node_status.go:111] "Node was previously registered" node="localhost"
Sep 9 00:18:14.805774 kubelet[2756]: I0909 00:18:14.805605 2756 kubelet_node_status.go:75] "Successfully registered node" node="localhost"
Sep 9 00:18:14.896419 kubelet[2756]: I0909 00:18:14.896343 2756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/b9711c8a15bce595a207576b2de799b9-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"b9711c8a15bce595a207576b2de799b9\") " pod="kube-system/kube-apiserver-localhost"
Sep 9 00:18:14.896419 kubelet[2756]: I0909 00:18:14.896419 2756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/b9711c8a15bce595a207576b2de799b9-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"b9711c8a15bce595a207576b2de799b9\") " pod="kube-system/kube-apiserver-localhost"
Sep 9 00:18:14.896650 kubelet[2756]: I0909 00:18:14.896448 2756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/fec3f691a145cb26ff55e4af388500b7-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"fec3f691a145cb26ff55e4af388500b7\") " pod="kube-system/kube-controller-manager-localhost"
Sep 9 00:18:14.896650 kubelet[2756]: I0909 00:18:14.896474 2756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/fec3f691a145cb26ff55e4af388500b7-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"fec3f691a145cb26ff55e4af388500b7\") " pod="kube-system/kube-controller-manager-localhost"
Sep 9 00:18:14.896650 kubelet[2756]: I0909 00:18:14.896495 2756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/5dc878868de11c6196259ae42039f4ff-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"5dc878868de11c6196259ae42039f4ff\") " pod="kube-system/kube-scheduler-localhost"
Sep 9 00:18:14.896650 kubelet[2756]: I0909 00:18:14.896513 2756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/b9711c8a15bce595a207576b2de799b9-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"b9711c8a15bce595a207576b2de799b9\") " pod="kube-system/kube-apiserver-localhost"
Sep 9 00:18:14.896760 kubelet[2756]: I0909 00:18:14.896579 2756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/fec3f691a145cb26ff55e4af388500b7-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"fec3f691a145cb26ff55e4af388500b7\") " pod="kube-system/kube-controller-manager-localhost"
Sep 9 00:18:14.896760 kubelet[2756]: I0909 00:18:14.896702 2756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/fec3f691a145cb26ff55e4af388500b7-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"fec3f691a145cb26ff55e4af388500b7\") " pod="kube-system/kube-controller-manager-localhost"
Sep 9 00:18:14.896760 kubelet[2756]: I0909 00:18:14.896732 2756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/fec3f691a145cb26ff55e4af388500b7-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"fec3f691a145cb26ff55e4af388500b7\") " pod="kube-system/kube-controller-manager-localhost"
Sep 9 00:18:15.034636 kubelet[2756]: E0909 00:18:15.034581 2756 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 9 00:18:15.036817 kubelet[2756]: E0909 00:18:15.036704 2756 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 9 00:18:15.036817 kubelet[2756]: E0909 00:18:15.036704 2756 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 9 00:18:15.293876 sudo[2778]: pam_unix(sudo:session): session closed for user root
Sep 9 00:18:15.580599 kubelet[2756]: I0909 00:18:15.580277 2756 apiserver.go:52] "Watching apiserver"
Sep 9 00:18:15.597339 kubelet[2756]: I0909 00:18:15.597239 2756 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world"
Sep 9 00:18:15.659665 kubelet[2756]: E0909 00:18:15.657153 2756 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 9 00:18:15.661421 kubelet[2756]: E0909 00:18:15.660393 2756 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 9 00:18:15.684261 kubelet[2756]: E0909 00:18:15.684188 2756 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost"
Sep 9 00:18:15.684466 kubelet[2756]: E0909 00:18:15.684442 2756 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 9 00:18:15.874595 kubelet[2756]: I0909 00:18:15.874017 2756 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.8739809539999999 podStartE2EDuration="1.873980954s" podCreationTimestamp="2025-09-09 00:18:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-09 00:18:15.781118506 +0000 UTC m=+1.298279782" watchObservedRunningTime="2025-09-09 00:18:15.873980954 +0000 UTC m=+1.391142210"
Sep 9 00:18:15.874595 kubelet[2756]: I0909 00:18:15.874268 2756 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.874260802 podStartE2EDuration="1.874260802s" podCreationTimestamp="2025-09-09 00:18:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-09 00:18:15.873334803 +0000 UTC m=+1.390496059" watchObservedRunningTime="2025-09-09 00:18:15.874260802 +0000 UTC m=+1.391422058"
Sep 9 00:18:16.296260 kubelet[2756]: I0909 00:18:16.296012 2756 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=2.295907932 podStartE2EDuration="2.295907932s" podCreationTimestamp="2025-09-09 00:18:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-09 00:18:16.042403283 +0000 UTC m=+1.559564559" watchObservedRunningTime="2025-09-09 00:18:16.295907932 +0000 UTC m=+1.813069188"
Sep 9 00:18:16.656302 kubelet[2756]: E0909 00:18:16.656257 2756 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 9 00:18:16.841440 kubelet[2756]: E0909 00:18:16.840914 2756 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 9 00:18:17.541116 sudo[1797]: pam_unix(sudo:session): session closed for user root
Sep 9 00:18:17.542809 sshd[1796]: Connection closed by 10.0.0.1 port 45980
Sep 9 00:18:17.562374 sshd-session[1794]: pam_unix(sshd:session): session closed for user core
Sep 9 00:18:17.581142 systemd[1]: sshd@8-10.0.0.54:22-10.0.0.1:45980.service: Deactivated successfully.
Sep 9 00:18:17.584090 systemd[1]: session-9.scope: Deactivated successfully.
Sep 9 00:18:17.584377 systemd[1]: session-9.scope: Consumed 5.428s CPU time, 263.3M memory peak.
Sep 9 00:18:17.587173 systemd-logind[1565]: Session 9 logged out. Waiting for processes to exit.
Sep 9 00:18:17.589419 systemd-logind[1565]: Removed session 9.
Sep 9 00:18:17.660652 kubelet[2756]: E0909 00:18:17.660591 2756 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 9 00:18:17.754781 kubelet[2756]: I0909 00:18:17.754725 2756 kuberuntime_manager.go:1635] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24"
Sep 9 00:18:17.755337 containerd[1580]: time="2025-09-09T00:18:17.755290443Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
Sep 9 00:18:17.756017 kubelet[2756]: I0909 00:18:17.755563 2756 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Sep 9 00:18:18.403943 kubelet[2756]: E0909 00:18:18.403894 2756 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:18:18.661799 kubelet[2756]: E0909 00:18:18.661661 2756 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:18:18.729453 kubelet[2756]: I0909 00:18:18.729405 2756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/43367385-f421-47ee-a75e-6cfbab1acfa2-lib-modules\") pod \"kube-proxy-92mnz\" (UID: \"43367385-f421-47ee-a75e-6cfbab1acfa2\") " pod="kube-system/kube-proxy-92mnz" Sep 9 00:18:18.729453 kubelet[2756]: I0909 00:18:18.729444 2756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/43367385-f421-47ee-a75e-6cfbab1acfa2-kube-proxy\") pod \"kube-proxy-92mnz\" (UID: \"43367385-f421-47ee-a75e-6cfbab1acfa2\") " pod="kube-system/kube-proxy-92mnz" Sep 9 00:18:18.729704 kubelet[2756]: I0909 00:18:18.729474 2756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/43367385-f421-47ee-a75e-6cfbab1acfa2-xtables-lock\") pod \"kube-proxy-92mnz\" (UID: \"43367385-f421-47ee-a75e-6cfbab1acfa2\") " pod="kube-system/kube-proxy-92mnz" Sep 9 00:18:18.729704 kubelet[2756]: I0909 00:18:18.729491 2756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4lgvx\" (UniqueName: 
\"kubernetes.io/projected/43367385-f421-47ee-a75e-6cfbab1acfa2-kube-api-access-4lgvx\") pod \"kube-proxy-92mnz\" (UID: \"43367385-f421-47ee-a75e-6cfbab1acfa2\") " pod="kube-system/kube-proxy-92mnz" Sep 9 00:18:18.741113 systemd[1]: Created slice kubepods-besteffort-pod43367385_f421_47ee_a75e_6cfbab1acfa2.slice - libcontainer container kubepods-besteffort-pod43367385_f421_47ee_a75e_6cfbab1acfa2.slice. Sep 9 00:18:18.751901 systemd[1]: Created slice kubepods-burstable-pod62d16909_5b45_464b_ab31_6c23beca80d3.slice - libcontainer container kubepods-burstable-pod62d16909_5b45_464b_ab31_6c23beca80d3.slice. Sep 9 00:18:18.829850 kubelet[2756]: I0909 00:18:18.829764 2756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/62d16909-5b45-464b-ab31-6c23beca80d3-lib-modules\") pod \"cilium-f4jb2\" (UID: \"62d16909-5b45-464b-ab31-6c23beca80d3\") " pod="kube-system/cilium-f4jb2" Sep 9 00:18:18.829850 kubelet[2756]: I0909 00:18:18.829828 2756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/62d16909-5b45-464b-ab31-6c23beca80d3-xtables-lock\") pod \"cilium-f4jb2\" (UID: \"62d16909-5b45-464b-ab31-6c23beca80d3\") " pod="kube-system/cilium-f4jb2" Sep 9 00:18:18.830102 kubelet[2756]: I0909 00:18:18.829870 2756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/62d16909-5b45-464b-ab31-6c23beca80d3-hostproc\") pod \"cilium-f4jb2\" (UID: \"62d16909-5b45-464b-ab31-6c23beca80d3\") " pod="kube-system/cilium-f4jb2" Sep 9 00:18:18.830102 kubelet[2756]: I0909 00:18:18.829892 2756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/62d16909-5b45-464b-ab31-6c23beca80d3-host-proc-sys-net\") pod 
\"cilium-f4jb2\" (UID: \"62d16909-5b45-464b-ab31-6c23beca80d3\") " pod="kube-system/cilium-f4jb2" Sep 9 00:18:18.830102 kubelet[2756]: I0909 00:18:18.829908 2756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/62d16909-5b45-464b-ab31-6c23beca80d3-bpf-maps\") pod \"cilium-f4jb2\" (UID: \"62d16909-5b45-464b-ab31-6c23beca80d3\") " pod="kube-system/cilium-f4jb2" Sep 9 00:18:18.830102 kubelet[2756]: I0909 00:18:18.829963 2756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/62d16909-5b45-464b-ab31-6c23beca80d3-cilium-run\") pod \"cilium-f4jb2\" (UID: \"62d16909-5b45-464b-ab31-6c23beca80d3\") " pod="kube-system/cilium-f4jb2" Sep 9 00:18:18.830102 kubelet[2756]: I0909 00:18:18.829981 2756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/62d16909-5b45-464b-ab31-6c23beca80d3-etc-cni-netd\") pod \"cilium-f4jb2\" (UID: \"62d16909-5b45-464b-ab31-6c23beca80d3\") " pod="kube-system/cilium-f4jb2" Sep 9 00:18:18.830102 kubelet[2756]: I0909 00:18:18.829995 2756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/62d16909-5b45-464b-ab31-6c23beca80d3-clustermesh-secrets\") pod \"cilium-f4jb2\" (UID: \"62d16909-5b45-464b-ab31-6c23beca80d3\") " pod="kube-system/cilium-f4jb2" Sep 9 00:18:18.830266 kubelet[2756]: I0909 00:18:18.830010 2756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/62d16909-5b45-464b-ab31-6c23beca80d3-hubble-tls\") pod \"cilium-f4jb2\" (UID: \"62d16909-5b45-464b-ab31-6c23beca80d3\") " pod="kube-system/cilium-f4jb2" Sep 9 00:18:18.830266 kubelet[2756]: I0909 
00:18:18.830080 2756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-trgfh\" (UniqueName: \"kubernetes.io/projected/62d16909-5b45-464b-ab31-6c23beca80d3-kube-api-access-trgfh\") pod \"cilium-f4jb2\" (UID: \"62d16909-5b45-464b-ab31-6c23beca80d3\") " pod="kube-system/cilium-f4jb2" Sep 9 00:18:18.830266 kubelet[2756]: I0909 00:18:18.830100 2756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/62d16909-5b45-464b-ab31-6c23beca80d3-cilium-cgroup\") pod \"cilium-f4jb2\" (UID: \"62d16909-5b45-464b-ab31-6c23beca80d3\") " pod="kube-system/cilium-f4jb2" Sep 9 00:18:18.830266 kubelet[2756]: I0909 00:18:18.830122 2756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/62d16909-5b45-464b-ab31-6c23beca80d3-cni-path\") pod \"cilium-f4jb2\" (UID: \"62d16909-5b45-464b-ab31-6c23beca80d3\") " pod="kube-system/cilium-f4jb2" Sep 9 00:18:18.830266 kubelet[2756]: I0909 00:18:18.830138 2756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/62d16909-5b45-464b-ab31-6c23beca80d3-cilium-config-path\") pod \"cilium-f4jb2\" (UID: \"62d16909-5b45-464b-ab31-6c23beca80d3\") " pod="kube-system/cilium-f4jb2" Sep 9 00:18:18.830266 kubelet[2756]: I0909 00:18:18.830197 2756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/62d16909-5b45-464b-ab31-6c23beca80d3-host-proc-sys-kernel\") pod \"cilium-f4jb2\" (UID: \"62d16909-5b45-464b-ab31-6c23beca80d3\") " pod="kube-system/cilium-f4jb2" Sep 9 00:18:19.344731 systemd[1]: Created slice kubepods-besteffort-podf40fe4b1_c833_48b1_a4bf_c80e09aef469.slice - libcontainer container 
kubepods-besteffort-podf40fe4b1_c833_48b1_a4bf_c80e09aef469.slice. Sep 9 00:18:19.352696 kubelet[2756]: E0909 00:18:19.352647 2756 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:18:19.353392 containerd[1580]: time="2025-09-09T00:18:19.353351567Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-92mnz,Uid:43367385-f421-47ee-a75e-6cfbab1acfa2,Namespace:kube-system,Attempt:0,}" Sep 9 00:18:19.357374 kubelet[2756]: E0909 00:18:19.357335 2756 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:18:19.357921 containerd[1580]: time="2025-09-09T00:18:19.357881828Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-f4jb2,Uid:62d16909-5b45-464b-ab31-6c23beca80d3,Namespace:kube-system,Attempt:0,}" Sep 9 00:18:19.433405 kubelet[2756]: I0909 00:18:19.433329 2756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/f40fe4b1-c833-48b1-a4bf-c80e09aef469-cilium-config-path\") pod \"cilium-operator-5d85765b45-v7n2n\" (UID: \"f40fe4b1-c833-48b1-a4bf-c80e09aef469\") " pod="kube-system/cilium-operator-5d85765b45-v7n2n" Sep 9 00:18:19.433405 kubelet[2756]: I0909 00:18:19.433385 2756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h8w88\" (UniqueName: \"kubernetes.io/projected/f40fe4b1-c833-48b1-a4bf-c80e09aef469-kube-api-access-h8w88\") pod \"cilium-operator-5d85765b45-v7n2n\" (UID: \"f40fe4b1-c833-48b1-a4bf-c80e09aef469\") " pod="kube-system/cilium-operator-5d85765b45-v7n2n" Sep 9 00:18:20.253171 kubelet[2756]: E0909 00:18:20.253124 2756 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some 
nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:18:20.253856 containerd[1580]: time="2025-09-09T00:18:20.253814782Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5d85765b45-v7n2n,Uid:f40fe4b1-c833-48b1-a4bf-c80e09aef469,Namespace:kube-system,Attempt:0,}" Sep 9 00:18:20.315521 containerd[1580]: time="2025-09-09T00:18:20.315426792Z" level=info msg="connecting to shim 77242de83faaee679dcc0cdf239d2c6b77b073b5cd36155f2ef417315b6fc6a1" address="unix:///run/containerd/s/58c5ce39d48d25e76b50b2fbc7b4c99d52419ddce2120753b931ca7c9a234832" namespace=k8s.io protocol=ttrpc version=3 Sep 9 00:18:20.332269 containerd[1580]: time="2025-09-09T00:18:20.332197934Z" level=info msg="connecting to shim 84fecb115a0d8c220944e7fb3d95fe609b88427815b65555c95de8fbbebf3bac" address="unix:///run/containerd/s/0332b922ac4cbecc4d034b99aa40ca2ef1b84df8f17b9d3b700d13e062dd2173" namespace=k8s.io protocol=ttrpc version=3 Sep 9 00:18:20.377410 systemd[1]: Started cri-containerd-77242de83faaee679dcc0cdf239d2c6b77b073b5cd36155f2ef417315b6fc6a1.scope - libcontainer container 77242de83faaee679dcc0cdf239d2c6b77b073b5cd36155f2ef417315b6fc6a1. Sep 9 00:18:20.379205 systemd[1]: Started cri-containerd-84fecb115a0d8c220944e7fb3d95fe609b88427815b65555c95de8fbbebf3bac.scope - libcontainer container 84fecb115a0d8c220944e7fb3d95fe609b88427815b65555c95de8fbbebf3bac. 
Sep 9 00:18:20.466872 containerd[1580]: time="2025-09-09T00:18:20.466788841Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-92mnz,Uid:43367385-f421-47ee-a75e-6cfbab1acfa2,Namespace:kube-system,Attempt:0,} returns sandbox id \"77242de83faaee679dcc0cdf239d2c6b77b073b5cd36155f2ef417315b6fc6a1\"" Sep 9 00:18:20.467977 kubelet[2756]: E0909 00:18:20.467922 2756 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:18:20.470874 containerd[1580]: time="2025-09-09T00:18:20.470821954Z" level=info msg="CreateContainer within sandbox \"77242de83faaee679dcc0cdf239d2c6b77b073b5cd36155f2ef417315b6fc6a1\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Sep 9 00:18:20.472193 containerd[1580]: time="2025-09-09T00:18:20.472129408Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-f4jb2,Uid:62d16909-5b45-464b-ab31-6c23beca80d3,Namespace:kube-system,Attempt:0,} returns sandbox id \"84fecb115a0d8c220944e7fb3d95fe609b88427815b65555c95de8fbbebf3bac\"" Sep 9 00:18:20.473219 kubelet[2756]: E0909 00:18:20.473175 2756 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:18:20.474822 containerd[1580]: time="2025-09-09T00:18:20.474599102Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Sep 9 00:18:20.486237 containerd[1580]: time="2025-09-09T00:18:20.486182836Z" level=info msg="connecting to shim 0323bdab5332ef735c58703eb4fdf4ed309e30875da12633541762bc2a5e25c5" address="unix:///run/containerd/s/f413227ec822d8bf6032ac083b229f629b4fb42bb1c6ba82f7675356d6133faf" namespace=k8s.io protocol=ttrpc version=3 Sep 9 00:18:20.486491 containerd[1580]: time="2025-09-09T00:18:20.486433900Z" level=info msg="Container 
eaace54d97e9c284dac7e6e30fdb8bf1cce53eeb24bdbfb88f9e588a821f4b56: CDI devices from CRI Config.CDIDevices: []" Sep 9 00:18:20.530281 systemd[1]: Started cri-containerd-0323bdab5332ef735c58703eb4fdf4ed309e30875da12633541762bc2a5e25c5.scope - libcontainer container 0323bdab5332ef735c58703eb4fdf4ed309e30875da12633541762bc2a5e25c5. Sep 9 00:18:20.637416 containerd[1580]: time="2025-09-09T00:18:20.637366709Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5d85765b45-v7n2n,Uid:f40fe4b1-c833-48b1-a4bf-c80e09aef469,Namespace:kube-system,Attempt:0,} returns sandbox id \"0323bdab5332ef735c58703eb4fdf4ed309e30875da12633541762bc2a5e25c5\"" Sep 9 00:18:20.638153 kubelet[2756]: E0909 00:18:20.638129 2756 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:18:20.654371 containerd[1580]: time="2025-09-09T00:18:20.654303022Z" level=info msg="CreateContainer within sandbox \"77242de83faaee679dcc0cdf239d2c6b77b073b5cd36155f2ef417315b6fc6a1\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"eaace54d97e9c284dac7e6e30fdb8bf1cce53eeb24bdbfb88f9e588a821f4b56\"" Sep 9 00:18:20.655989 containerd[1580]: time="2025-09-09T00:18:20.655100554Z" level=info msg="StartContainer for \"eaace54d97e9c284dac7e6e30fdb8bf1cce53eeb24bdbfb88f9e588a821f4b56\"" Sep 9 00:18:20.656741 containerd[1580]: time="2025-09-09T00:18:20.656661006Z" level=info msg="connecting to shim eaace54d97e9c284dac7e6e30fdb8bf1cce53eeb24bdbfb88f9e588a821f4b56" address="unix:///run/containerd/s/58c5ce39d48d25e76b50b2fbc7b4c99d52419ddce2120753b931ca7c9a234832" protocol=ttrpc version=3 Sep 9 00:18:20.683258 systemd[1]: Started cri-containerd-eaace54d97e9c284dac7e6e30fdb8bf1cce53eeb24bdbfb88f9e588a821f4b56.scope - libcontainer container eaace54d97e9c284dac7e6e30fdb8bf1cce53eeb24bdbfb88f9e588a821f4b56. 
Sep 9 00:18:20.733145 containerd[1580]: time="2025-09-09T00:18:20.733085807Z" level=info msg="StartContainer for \"eaace54d97e9c284dac7e6e30fdb8bf1cce53eeb24bdbfb88f9e588a821f4b56\" returns successfully" Sep 9 00:18:21.672575 kubelet[2756]: E0909 00:18:21.672348 2756 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:18:21.853302 kubelet[2756]: I0909 00:18:21.853028 2756 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-92mnz" podStartSLOduration=3.853005739 podStartE2EDuration="3.853005739s" podCreationTimestamp="2025-09-09 00:18:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-09 00:18:21.852902644 +0000 UTC m=+7.370063930" watchObservedRunningTime="2025-09-09 00:18:21.853005739 +0000 UTC m=+7.370166995" Sep 9 00:18:22.673241 kubelet[2756]: E0909 00:18:22.673210 2756 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:18:23.031186 kubelet[2756]: E0909 00:18:23.031027 2756 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:18:23.674929 kubelet[2756]: E0909 00:18:23.674889 2756 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:18:28.996963 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount315248488.mount: Deactivated successfully. 
Sep 9 00:18:34.303774 containerd[1580]: time="2025-09-09T00:18:34.303675498Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 00:18:34.304620 containerd[1580]: time="2025-09-09T00:18:34.304515777Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=166730503" Sep 9 00:18:34.306402 containerd[1580]: time="2025-09-09T00:18:34.306340627Z" level=info msg="ImageCreate event name:\"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 00:18:34.308482 containerd[1580]: time="2025-09-09T00:18:34.308431237Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"166719855\" in 13.833787551s" Sep 9 00:18:34.308482 containerd[1580]: time="2025-09-09T00:18:34.308475269Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\"" Sep 9 00:18:34.322657 containerd[1580]: time="2025-09-09T00:18:34.322580889Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Sep 9 00:18:34.331801 containerd[1580]: time="2025-09-09T00:18:34.331699697Z" level=info msg="CreateContainer within sandbox \"84fecb115a0d8c220944e7fb3d95fe609b88427815b65555c95de8fbbebf3bac\" for container 
&ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Sep 9 00:18:34.341853 containerd[1580]: time="2025-09-09T00:18:34.341768401Z" level=info msg="Container 90386877b7b6f2304500552f8e0073c2f065bbd23dcd7063f4aacb275de7c798: CDI devices from CRI Config.CDIDevices: []" Sep 9 00:18:34.348982 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3065644279.mount: Deactivated successfully. Sep 9 00:18:34.351816 containerd[1580]: time="2025-09-09T00:18:34.351752956Z" level=info msg="CreateContainer within sandbox \"84fecb115a0d8c220944e7fb3d95fe609b88427815b65555c95de8fbbebf3bac\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"90386877b7b6f2304500552f8e0073c2f065bbd23dcd7063f4aacb275de7c798\"" Sep 9 00:18:34.352470 containerd[1580]: time="2025-09-09T00:18:34.352425690Z" level=info msg="StartContainer for \"90386877b7b6f2304500552f8e0073c2f065bbd23dcd7063f4aacb275de7c798\"" Sep 9 00:18:34.353382 containerd[1580]: time="2025-09-09T00:18:34.353353374Z" level=info msg="connecting to shim 90386877b7b6f2304500552f8e0073c2f065bbd23dcd7063f4aacb275de7c798" address="unix:///run/containerd/s/0332b922ac4cbecc4d034b99aa40ca2ef1b84df8f17b9d3b700d13e062dd2173" protocol=ttrpc version=3 Sep 9 00:18:34.421295 systemd[1]: Started cri-containerd-90386877b7b6f2304500552f8e0073c2f065bbd23dcd7063f4aacb275de7c798.scope - libcontainer container 90386877b7b6f2304500552f8e0073c2f065bbd23dcd7063f4aacb275de7c798. Sep 9 00:18:34.464891 containerd[1580]: time="2025-09-09T00:18:34.464824666Z" level=info msg="StartContainer for \"90386877b7b6f2304500552f8e0073c2f065bbd23dcd7063f4aacb275de7c798\" returns successfully" Sep 9 00:18:34.480951 systemd[1]: cri-containerd-90386877b7b6f2304500552f8e0073c2f065bbd23dcd7063f4aacb275de7c798.scope: Deactivated successfully. 
Sep 9 00:18:34.485286 containerd[1580]: time="2025-09-09T00:18:34.485231900Z" level=info msg="received exit event container_id:\"90386877b7b6f2304500552f8e0073c2f065bbd23dcd7063f4aacb275de7c798\" id:\"90386877b7b6f2304500552f8e0073c2f065bbd23dcd7063f4aacb275de7c798\" pid:3181 exited_at:{seconds:1757377114 nanos:484587429}" Sep 9 00:18:34.485487 containerd[1580]: time="2025-09-09T00:18:34.485371893Z" level=info msg="TaskExit event in podsandbox handler container_id:\"90386877b7b6f2304500552f8e0073c2f065bbd23dcd7063f4aacb275de7c798\" id:\"90386877b7b6f2304500552f8e0073c2f065bbd23dcd7063f4aacb275de7c798\" pid:3181 exited_at:{seconds:1757377114 nanos:484587429}" Sep 9 00:18:34.516911 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-90386877b7b6f2304500552f8e0073c2f065bbd23dcd7063f4aacb275de7c798-rootfs.mount: Deactivated successfully. Sep 9 00:18:34.703010 kubelet[2756]: E0909 00:18:34.702965 2756 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:18:35.707167 kubelet[2756]: E0909 00:18:35.707105 2756 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:18:35.711067 containerd[1580]: time="2025-09-09T00:18:35.710435765Z" level=info msg="CreateContainer within sandbox \"84fecb115a0d8c220944e7fb3d95fe609b88427815b65555c95de8fbbebf3bac\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Sep 9 00:18:35.729226 containerd[1580]: time="2025-09-09T00:18:35.727805505Z" level=info msg="Container 30b5d1308ccbd31ec58822b9045532a246f339f467c623d6d3686a9c38184411: CDI devices from CRI Config.CDIDevices: []" Sep 9 00:18:35.738765 containerd[1580]: time="2025-09-09T00:18:35.738695660Z" level=info msg="CreateContainer within sandbox 
\"84fecb115a0d8c220944e7fb3d95fe609b88427815b65555c95de8fbbebf3bac\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"30b5d1308ccbd31ec58822b9045532a246f339f467c623d6d3686a9c38184411\"" Sep 9 00:18:35.739460 containerd[1580]: time="2025-09-09T00:18:35.739422356Z" level=info msg="StartContainer for \"30b5d1308ccbd31ec58822b9045532a246f339f467c623d6d3686a9c38184411\"" Sep 9 00:18:35.740515 containerd[1580]: time="2025-09-09T00:18:35.740484673Z" level=info msg="connecting to shim 30b5d1308ccbd31ec58822b9045532a246f339f467c623d6d3686a9c38184411" address="unix:///run/containerd/s/0332b922ac4cbecc4d034b99aa40ca2ef1b84df8f17b9d3b700d13e062dd2173" protocol=ttrpc version=3 Sep 9 00:18:35.775386 systemd[1]: Started cri-containerd-30b5d1308ccbd31ec58822b9045532a246f339f467c623d6d3686a9c38184411.scope - libcontainer container 30b5d1308ccbd31ec58822b9045532a246f339f467c623d6d3686a9c38184411. Sep 9 00:18:35.812924 containerd[1580]: time="2025-09-09T00:18:35.812863967Z" level=info msg="StartContainer for \"30b5d1308ccbd31ec58822b9045532a246f339f467c623d6d3686a9c38184411\" returns successfully" Sep 9 00:18:35.829316 systemd[1]: systemd-sysctl.service: Deactivated successfully. Sep 9 00:18:35.829955 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Sep 9 00:18:35.830478 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... Sep 9 00:18:35.832340 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Sep 9 00:18:35.834979 containerd[1580]: time="2025-09-09T00:18:35.834923572Z" level=info msg="received exit event container_id:\"30b5d1308ccbd31ec58822b9045532a246f339f467c623d6d3686a9c38184411\" id:\"30b5d1308ccbd31ec58822b9045532a246f339f467c623d6d3686a9c38184411\" pid:3225 exited_at:{seconds:1757377115 nanos:834637143}" Sep 9 00:18:35.834987 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. 
Sep 9 00:18:35.835454 containerd[1580]: time="2025-09-09T00:18:35.835383315Z" level=info msg="TaskExit event in podsandbox handler container_id:\"30b5d1308ccbd31ec58822b9045532a246f339f467c623d6d3686a9c38184411\" id:\"30b5d1308ccbd31ec58822b9045532a246f339f467c623d6d3686a9c38184411\" pid:3225 exited_at:{seconds:1757377115 nanos:834637143}" Sep 9 00:18:35.835598 systemd[1]: cri-containerd-30b5d1308ccbd31ec58822b9045532a246f339f467c623d6d3686a9c38184411.scope: Deactivated successfully. Sep 9 00:18:35.877651 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Sep 9 00:18:36.654964 containerd[1580]: time="2025-09-09T00:18:36.654901506Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 00:18:36.655748 containerd[1580]: time="2025-09-09T00:18:36.655694686Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=18904197" Sep 9 00:18:36.657004 containerd[1580]: time="2025-09-09T00:18:36.656966065Z" level=info msg="ImageCreate event name:\"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 00:18:36.658256 containerd[1580]: time="2025-09-09T00:18:36.658206005Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"18897442\" in 2.335566806s" Sep 9 00:18:36.658256 containerd[1580]: time="2025-09-09T00:18:36.658241251Z" level=info msg="PullImage 
\"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\"" Sep 9 00:18:36.660361 containerd[1580]: time="2025-09-09T00:18:36.660319867Z" level=info msg="CreateContainer within sandbox \"0323bdab5332ef735c58703eb4fdf4ed309e30875da12633541762bc2a5e25c5\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Sep 9 00:18:36.672676 containerd[1580]: time="2025-09-09T00:18:36.672604239Z" level=info msg="Container 5861178676eca7f29ec27f69a3d684b602f0401f81fbf60ae62a87f5535ea920: CDI devices from CRI Config.CDIDevices: []" Sep 9 00:18:36.680070 containerd[1580]: time="2025-09-09T00:18:36.680003902Z" level=info msg="CreateContainer within sandbox \"0323bdab5332ef735c58703eb4fdf4ed309e30875da12633541762bc2a5e25c5\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"5861178676eca7f29ec27f69a3d684b602f0401f81fbf60ae62a87f5535ea920\"" Sep 9 00:18:36.680766 containerd[1580]: time="2025-09-09T00:18:36.680713186Z" level=info msg="StartContainer for \"5861178676eca7f29ec27f69a3d684b602f0401f81fbf60ae62a87f5535ea920\"" Sep 9 00:18:36.681917 containerd[1580]: time="2025-09-09T00:18:36.681884387Z" level=info msg="connecting to shim 5861178676eca7f29ec27f69a3d684b602f0401f81fbf60ae62a87f5535ea920" address="unix:///run/containerd/s/f413227ec822d8bf6032ac083b229f629b4fb42bb1c6ba82f7675356d6133faf" protocol=ttrpc version=3 Sep 9 00:18:36.712392 kubelet[2756]: E0909 00:18:36.712339 2756 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:18:36.714317 containerd[1580]: time="2025-09-09T00:18:36.714264237Z" level=info msg="CreateContainer within sandbox \"84fecb115a0d8c220944e7fb3d95fe609b88427815b65555c95de8fbbebf3bac\" for container 
&ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Sep 9 00:18:36.716334 systemd[1]: Started cri-containerd-5861178676eca7f29ec27f69a3d684b602f0401f81fbf60ae62a87f5535ea920.scope - libcontainer container 5861178676eca7f29ec27f69a3d684b602f0401f81fbf60ae62a87f5535ea920. Sep 9 00:18:36.728841 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-30b5d1308ccbd31ec58822b9045532a246f339f467c623d6d3686a9c38184411-rootfs.mount: Deactivated successfully. Sep 9 00:18:36.743806 containerd[1580]: time="2025-09-09T00:18:36.742415923Z" level=info msg="Container b725f4c3fd37cabce427ace194eff11edb9264e23c3636a9cff6e8a71889bc08: CDI devices from CRI Config.CDIDevices: []" Sep 9 00:18:36.745602 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3773925307.mount: Deactivated successfully. Sep 9 00:18:36.758346 containerd[1580]: time="2025-09-09T00:18:36.757839404Z" level=info msg="CreateContainer within sandbox \"84fecb115a0d8c220944e7fb3d95fe609b88427815b65555c95de8fbbebf3bac\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"b725f4c3fd37cabce427ace194eff11edb9264e23c3636a9cff6e8a71889bc08\"" Sep 9 00:18:36.759297 containerd[1580]: time="2025-09-09T00:18:36.758543216Z" level=info msg="StartContainer for \"b725f4c3fd37cabce427ace194eff11edb9264e23c3636a9cff6e8a71889bc08\"" Sep 9 00:18:36.761436 containerd[1580]: time="2025-09-09T00:18:36.761395966Z" level=info msg="connecting to shim b725f4c3fd37cabce427ace194eff11edb9264e23c3636a9cff6e8a71889bc08" address="unix:///run/containerd/s/0332b922ac4cbecc4d034b99aa40ca2ef1b84df8f17b9d3b700d13e062dd2173" protocol=ttrpc version=3 Sep 9 00:18:36.782055 containerd[1580]: time="2025-09-09T00:18:36.781648761Z" level=info msg="StartContainer for \"5861178676eca7f29ec27f69a3d684b602f0401f81fbf60ae62a87f5535ea920\" returns successfully" Sep 9 00:18:36.791265 systemd[1]: Started cri-containerd-b725f4c3fd37cabce427ace194eff11edb9264e23c3636a9cff6e8a71889bc08.scope - libcontainer container 
b725f4c3fd37cabce427ace194eff11edb9264e23c3636a9cff6e8a71889bc08. Sep 9 00:18:36.843893 systemd[1]: cri-containerd-b725f4c3fd37cabce427ace194eff11edb9264e23c3636a9cff6e8a71889bc08.scope: Deactivated successfully. Sep 9 00:18:36.847555 containerd[1580]: time="2025-09-09T00:18:36.847465750Z" level=info msg="StartContainer for \"b725f4c3fd37cabce427ace194eff11edb9264e23c3636a9cff6e8a71889bc08\" returns successfully" Sep 9 00:18:36.848612 containerd[1580]: time="2025-09-09T00:18:36.848583459Z" level=info msg="received exit event container_id:\"b725f4c3fd37cabce427ace194eff11edb9264e23c3636a9cff6e8a71889bc08\" id:\"b725f4c3fd37cabce427ace194eff11edb9264e23c3636a9cff6e8a71889bc08\" pid:3324 exited_at:{seconds:1757377116 nanos:848266153}" Sep 9 00:18:36.848874 containerd[1580]: time="2025-09-09T00:18:36.848843188Z" level=info msg="TaskExit event in podsandbox handler container_id:\"b725f4c3fd37cabce427ace194eff11edb9264e23c3636a9cff6e8a71889bc08\" id:\"b725f4c3fd37cabce427ace194eff11edb9264e23c3636a9cff6e8a71889bc08\" pid:3324 exited_at:{seconds:1757377116 nanos:848266153}" Sep 9 00:18:36.880360 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b725f4c3fd37cabce427ace194eff11edb9264e23c3636a9cff6e8a71889bc08-rootfs.mount: Deactivated successfully. 
Sep 9 00:18:37.714974 kubelet[2756]: E0909 00:18:37.714919 2756 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:18:37.739001 kubelet[2756]: E0909 00:18:37.738731 2756 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:18:37.912073 kubelet[2756]: I0909 00:18:37.910922 2756 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-5d85765b45-v7n2n" podStartSLOduration=2.890476312 podStartE2EDuration="18.910900883s" podCreationTimestamp="2025-09-09 00:18:19 +0000 UTC" firstStartedPulling="2025-09-09 00:18:20.638547193 +0000 UTC m=+6.155708449" lastFinishedPulling="2025-09-09 00:18:36.658971764 +0000 UTC m=+22.176133020" observedRunningTime="2025-09-09 00:18:37.910543131 +0000 UTC m=+23.427704397" watchObservedRunningTime="2025-09-09 00:18:37.910900883 +0000 UTC m=+23.428062139" Sep 9 00:18:38.723336 kubelet[2756]: E0909 00:18:38.723246 2756 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:18:38.723336 kubelet[2756]: E0909 00:18:38.723276 2756 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:18:38.725200 containerd[1580]: time="2025-09-09T00:18:38.725144784Z" level=info msg="CreateContainer within sandbox \"84fecb115a0d8c220944e7fb3d95fe609b88427815b65555c95de8fbbebf3bac\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Sep 9 00:18:38.740952 containerd[1580]: time="2025-09-09T00:18:38.740899761Z" level=info msg="Container 4047b86f4af245158d39dc05e1ff6da43a2d8ef296355ce8f4cea260372fe6da: 
CDI devices from CRI Config.CDIDevices: []" Sep 9 00:18:38.749932 containerd[1580]: time="2025-09-09T00:18:38.749869952Z" level=info msg="CreateContainer within sandbox \"84fecb115a0d8c220944e7fb3d95fe609b88427815b65555c95de8fbbebf3bac\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"4047b86f4af245158d39dc05e1ff6da43a2d8ef296355ce8f4cea260372fe6da\"" Sep 9 00:18:38.750682 containerd[1580]: time="2025-09-09T00:18:38.750632746Z" level=info msg="StartContainer for \"4047b86f4af245158d39dc05e1ff6da43a2d8ef296355ce8f4cea260372fe6da\"" Sep 9 00:18:38.751679 containerd[1580]: time="2025-09-09T00:18:38.751648403Z" level=info msg="connecting to shim 4047b86f4af245158d39dc05e1ff6da43a2d8ef296355ce8f4cea260372fe6da" address="unix:///run/containerd/s/0332b922ac4cbecc4d034b99aa40ca2ef1b84df8f17b9d3b700d13e062dd2173" protocol=ttrpc version=3 Sep 9 00:18:38.778212 systemd[1]: Started cri-containerd-4047b86f4af245158d39dc05e1ff6da43a2d8ef296355ce8f4cea260372fe6da.scope - libcontainer container 4047b86f4af245158d39dc05e1ff6da43a2d8ef296355ce8f4cea260372fe6da. Sep 9 00:18:38.807988 systemd[1]: cri-containerd-4047b86f4af245158d39dc05e1ff6da43a2d8ef296355ce8f4cea260372fe6da.scope: Deactivated successfully. 
Sep 9 00:18:38.808606 containerd[1580]: time="2025-09-09T00:18:38.808550288Z" level=info msg="TaskExit event in podsandbox handler container_id:\"4047b86f4af245158d39dc05e1ff6da43a2d8ef296355ce8f4cea260372fe6da\" id:\"4047b86f4af245158d39dc05e1ff6da43a2d8ef296355ce8f4cea260372fe6da\" pid:3364 exited_at:{seconds:1757377118 nanos:808251827}" Sep 9 00:18:38.873998 containerd[1580]: time="2025-09-09T00:18:38.873801678Z" level=info msg="received exit event container_id:\"4047b86f4af245158d39dc05e1ff6da43a2d8ef296355ce8f4cea260372fe6da\" id:\"4047b86f4af245158d39dc05e1ff6da43a2d8ef296355ce8f4cea260372fe6da\" pid:3364 exited_at:{seconds:1757377118 nanos:808251827}" Sep 9 00:18:38.875753 containerd[1580]: time="2025-09-09T00:18:38.875706747Z" level=info msg="StartContainer for \"4047b86f4af245158d39dc05e1ff6da43a2d8ef296355ce8f4cea260372fe6da\" returns successfully" Sep 9 00:18:38.900647 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-4047b86f4af245158d39dc05e1ff6da43a2d8ef296355ce8f4cea260372fe6da-rootfs.mount: Deactivated successfully. 
Sep 9 00:18:39.728594 kubelet[2756]: E0909 00:18:39.728552 2756 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:18:39.730356 containerd[1580]: time="2025-09-09T00:18:39.730313914Z" level=info msg="CreateContainer within sandbox \"84fecb115a0d8c220944e7fb3d95fe609b88427815b65555c95de8fbbebf3bac\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Sep 9 00:18:39.747111 containerd[1580]: time="2025-09-09T00:18:39.747009325Z" level=info msg="Container 9bb876e2646422d8fa9a5529dd49114c54831802ebf672740665c14af8b2f436: CDI devices from CRI Config.CDIDevices: []" Sep 9 00:18:39.758133 containerd[1580]: time="2025-09-09T00:18:39.758034405Z" level=info msg="CreateContainer within sandbox \"84fecb115a0d8c220944e7fb3d95fe609b88427815b65555c95de8fbbebf3bac\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"9bb876e2646422d8fa9a5529dd49114c54831802ebf672740665c14af8b2f436\"" Sep 9 00:18:39.760075 containerd[1580]: time="2025-09-09T00:18:39.758772431Z" level=info msg="StartContainer for \"9bb876e2646422d8fa9a5529dd49114c54831802ebf672740665c14af8b2f436\"" Sep 9 00:18:39.760293 containerd[1580]: time="2025-09-09T00:18:39.760258022Z" level=info msg="connecting to shim 9bb876e2646422d8fa9a5529dd49114c54831802ebf672740665c14af8b2f436" address="unix:///run/containerd/s/0332b922ac4cbecc4d034b99aa40ca2ef1b84df8f17b9d3b700d13e062dd2173" protocol=ttrpc version=3 Sep 9 00:18:39.781314 systemd[1]: Started cri-containerd-9bb876e2646422d8fa9a5529dd49114c54831802ebf672740665c14af8b2f436.scope - libcontainer container 9bb876e2646422d8fa9a5529dd49114c54831802ebf672740665c14af8b2f436. 
Sep 9 00:18:39.971335 containerd[1580]: time="2025-09-09T00:18:39.971271369Z" level=info msg="StartContainer for \"9bb876e2646422d8fa9a5529dd49114c54831802ebf672740665c14af8b2f436\" returns successfully" Sep 9 00:18:40.040021 containerd[1580]: time="2025-09-09T00:18:40.039887502Z" level=info msg="TaskExit event in podsandbox handler container_id:\"9bb876e2646422d8fa9a5529dd49114c54831802ebf672740665c14af8b2f436\" id:\"14992e86f93ce691bfa452667db209f2be8448751e0292ae6092e65894e63df4\" pid:3437 exited_at:{seconds:1757377120 nanos:39550058}" Sep 9 00:18:40.103370 kubelet[2756]: I0909 00:18:40.103151 2756 kubelet_node_status.go:488] "Fast updating node status as it just became ready" Sep 9 00:18:40.143292 systemd[1]: Created slice kubepods-burstable-pode2e7bcdd_6c07_4d18_825f_d6f73e7f4503.slice - libcontainer container kubepods-burstable-pode2e7bcdd_6c07_4d18_825f_d6f73e7f4503.slice. Sep 9 00:18:40.153029 systemd[1]: Created slice kubepods-burstable-pode96140ce_7f07_4c7c_9e55_a5a950730782.slice - libcontainer container kubepods-burstable-pode96140ce_7f07_4c7c_9e55_a5a950730782.slice. 
Sep 9 00:18:40.283278 kubelet[2756]: I0909 00:18:40.283184 2756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zd8sc\" (UniqueName: \"kubernetes.io/projected/e96140ce-7f07-4c7c-9e55-a5a950730782-kube-api-access-zd8sc\") pod \"coredns-7c65d6cfc9-66hsx\" (UID: \"e96140ce-7f07-4c7c-9e55-a5a950730782\") " pod="kube-system/coredns-7c65d6cfc9-66hsx" Sep 9 00:18:40.283568 kubelet[2756]: I0909 00:18:40.283291 2756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e2e7bcdd-6c07-4d18-825f-d6f73e7f4503-config-volume\") pod \"coredns-7c65d6cfc9-9hjlm\" (UID: \"e2e7bcdd-6c07-4d18-825f-d6f73e7f4503\") " pod="kube-system/coredns-7c65d6cfc9-9hjlm" Sep 9 00:18:40.283568 kubelet[2756]: I0909 00:18:40.283321 2756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rtxnt\" (UniqueName: \"kubernetes.io/projected/e2e7bcdd-6c07-4d18-825f-d6f73e7f4503-kube-api-access-rtxnt\") pod \"coredns-7c65d6cfc9-9hjlm\" (UID: \"e2e7bcdd-6c07-4d18-825f-d6f73e7f4503\") " pod="kube-system/coredns-7c65d6cfc9-9hjlm" Sep 9 00:18:40.283568 kubelet[2756]: I0909 00:18:40.283368 2756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e96140ce-7f07-4c7c-9e55-a5a950730782-config-volume\") pod \"coredns-7c65d6cfc9-66hsx\" (UID: \"e96140ce-7f07-4c7c-9e55-a5a950730782\") " pod="kube-system/coredns-7c65d6cfc9-66hsx" Sep 9 00:18:40.736873 kubelet[2756]: E0909 00:18:40.736822 2756 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:18:40.748285 kubelet[2756]: E0909 00:18:40.748204 2756 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some 
nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:18:40.757378 kubelet[2756]: E0909 00:18:40.757319 2756 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:18:40.757885 containerd[1580]: time="2025-09-09T00:18:40.757817062Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-9hjlm,Uid:e2e7bcdd-6c07-4d18-825f-d6f73e7f4503,Namespace:kube-system,Attempt:0,}" Sep 9 00:18:40.758378 containerd[1580]: time="2025-09-09T00:18:40.758127214Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-66hsx,Uid:e96140ce-7f07-4c7c-9e55-a5a950730782,Namespace:kube-system,Attempt:0,}" Sep 9 00:18:41.738737 kubelet[2756]: E0909 00:18:41.738685 2756 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:18:42.276649 systemd-networkd[1439]: cilium_host: Link UP Sep 9 00:18:42.277790 systemd-networkd[1439]: cilium_net: Link UP Sep 9 00:18:42.278020 systemd-networkd[1439]: cilium_net: Gained carrier Sep 9 00:18:42.280169 systemd-networkd[1439]: cilium_host: Gained carrier Sep 9 00:18:42.404938 systemd-networkd[1439]: cilium_vxlan: Link UP Sep 9 00:18:42.405214 systemd-networkd[1439]: cilium_vxlan: Gained carrier Sep 9 00:18:42.513321 systemd-networkd[1439]: cilium_net: Gained IPv6LL Sep 9 00:18:42.668084 kernel: NET: Registered PF_ALG protocol family Sep 9 00:18:42.739872 kubelet[2756]: E0909 00:18:42.739837 2756 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:18:43.002229 systemd-networkd[1439]: cilium_host: Gained IPv6LL Sep 9 00:18:43.425983 systemd-networkd[1439]: lxc_health: Link UP Sep 9 00:18:43.437845 
systemd-networkd[1439]: lxc_health: Gained carrier Sep 9 00:18:43.771755 systemd-networkd[1439]: cilium_vxlan: Gained IPv6LL Sep 9 00:18:43.898150 kernel: eth0: renamed from tmp0ae61 Sep 9 00:18:43.925487 kernel: eth0: renamed from tmp0ef4b Sep 9 00:18:43.950859 systemd-networkd[1439]: lxc37553af35fb3: Link UP Sep 9 00:18:43.963173 systemd-networkd[1439]: lxcd7e454436010: Link UP Sep 9 00:18:43.967741 systemd-networkd[1439]: lxc37553af35fb3: Gained carrier Sep 9 00:18:43.968326 systemd-networkd[1439]: lxcd7e454436010: Gained carrier Sep 9 00:18:45.307306 systemd-networkd[1439]: lxc_health: Gained IPv6LL Sep 9 00:18:45.364824 kubelet[2756]: E0909 00:18:45.364762 2756 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:18:45.370224 systemd-networkd[1439]: lxcd7e454436010: Gained IPv6LL Sep 9 00:18:45.383629 kubelet[2756]: I0909 00:18:45.383545 2756 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-f4jb2" podStartSLOduration=13.544996334 podStartE2EDuration="27.383521301s" podCreationTimestamp="2025-09-09 00:18:18 +0000 UTC" firstStartedPulling="2025-09-09 00:18:20.473937225 +0000 UTC m=+5.991098481" lastFinishedPulling="2025-09-09 00:18:34.312462192 +0000 UTC m=+19.829623448" observedRunningTime="2025-09-09 00:18:40.752020866 +0000 UTC m=+26.269182132" watchObservedRunningTime="2025-09-09 00:18:45.383521301 +0000 UTC m=+30.900682557" Sep 9 00:18:45.561312 systemd-networkd[1439]: lxc37553af35fb3: Gained IPv6LL Sep 9 00:18:45.749517 kubelet[2756]: E0909 00:18:45.749417 2756 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:18:46.721382 systemd[1]: Started sshd@9-10.0.0.54:22-10.0.0.1:45846.service - OpenSSH per-connection server daemon (10.0.0.1:45846). 
Sep 9 00:18:46.752287 kubelet[2756]: E0909 00:18:46.752233 2756 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:18:46.782180 sshd[3905]: Accepted publickey for core from 10.0.0.1 port 45846 ssh2: RSA SHA256:71pIFbyCQBIroIGqc5DeH9snrZBBxVf1ertHDrOSKjM Sep 9 00:18:46.784475 sshd-session[3905]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 00:18:46.790706 systemd-logind[1565]: New session 10 of user core. Sep 9 00:18:46.799439 systemd[1]: Started session-10.scope - Session 10 of User core. Sep 9 00:18:46.950773 sshd[3907]: Connection closed by 10.0.0.1 port 45846 Sep 9 00:18:46.951193 sshd-session[3905]: pam_unix(sshd:session): session closed for user core Sep 9 00:18:46.956511 systemd[1]: sshd@9-10.0.0.54:22-10.0.0.1:45846.service: Deactivated successfully. Sep 9 00:18:46.958750 systemd[1]: session-10.scope: Deactivated successfully. Sep 9 00:18:46.959641 systemd-logind[1565]: Session 10 logged out. Waiting for processes to exit. Sep 9 00:18:46.961822 systemd-logind[1565]: Removed session 10. 
Sep 9 00:18:47.902963 containerd[1580]: time="2025-09-09T00:18:47.902889499Z" level=info msg="connecting to shim 0ae61072355550d679c01c5639d877b4d107fb5dc880dfe4e5e1a75cee96a972" address="unix:///run/containerd/s/b39e16468e1625208cee8f9023519d979226820765a796d5ca2d89186c9f1e71" namespace=k8s.io protocol=ttrpc version=3 Sep 9 00:18:47.903455 containerd[1580]: time="2025-09-09T00:18:47.903295251Z" level=info msg="connecting to shim 0ef4b50a0bbc35ab3a9c3ef292b426b21b02efe17bb083b50b628bbd7ee7b254" address="unix:///run/containerd/s/c378d393ee35d951c01c3b6c7973141831d7dc558b81ac754b7913b8561170ed" namespace=k8s.io protocol=ttrpc version=3 Sep 9 00:18:47.935290 systemd[1]: Started cri-containerd-0ae61072355550d679c01c5639d877b4d107fb5dc880dfe4e5e1a75cee96a972.scope - libcontainer container 0ae61072355550d679c01c5639d877b4d107fb5dc880dfe4e5e1a75cee96a972. Sep 9 00:18:47.940479 systemd[1]: Started cri-containerd-0ef4b50a0bbc35ab3a9c3ef292b426b21b02efe17bb083b50b628bbd7ee7b254.scope - libcontainer container 0ef4b50a0bbc35ab3a9c3ef292b426b21b02efe17bb083b50b628bbd7ee7b254. 
Sep 9 00:18:47.955470 systemd-resolved[1477]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 9 00:18:47.959629 systemd-resolved[1477]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 9 00:18:47.991997 containerd[1580]: time="2025-09-09T00:18:47.991910439Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-66hsx,Uid:e96140ce-7f07-4c7c-9e55-a5a950730782,Namespace:kube-system,Attempt:0,} returns sandbox id \"0ae61072355550d679c01c5639d877b4d107fb5dc880dfe4e5e1a75cee96a972\"" Sep 9 00:18:47.997827 kubelet[2756]: E0909 00:18:47.997794 2756 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:18:48.000670 containerd[1580]: time="2025-09-09T00:18:48.000622895Z" level=info msg="CreateContainer within sandbox \"0ae61072355550d679c01c5639d877b4d107fb5dc880dfe4e5e1a75cee96a972\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Sep 9 00:18:48.003724 containerd[1580]: time="2025-09-09T00:18:48.003682199Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-9hjlm,Uid:e2e7bcdd-6c07-4d18-825f-d6f73e7f4503,Namespace:kube-system,Attempt:0,} returns sandbox id \"0ef4b50a0bbc35ab3a9c3ef292b426b21b02efe17bb083b50b628bbd7ee7b254\"" Sep 9 00:18:48.004522 kubelet[2756]: E0909 00:18:48.004497 2756 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:18:48.006875 containerd[1580]: time="2025-09-09T00:18:48.006290665Z" level=info msg="CreateContainer within sandbox \"0ef4b50a0bbc35ab3a9c3ef292b426b21b02efe17bb083b50b628bbd7ee7b254\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Sep 9 00:18:48.024796 containerd[1580]: time="2025-09-09T00:18:48.024738696Z" 
level=info msg="Container ec071efbe6ff07ad5ad5999d03c5c2ae626e60b7252b131c6ddbcfaa4a32cf5a: CDI devices from CRI Config.CDIDevices: []" Sep 9 00:18:48.027061 containerd[1580]: time="2025-09-09T00:18:48.026819593Z" level=info msg="Container 78357584d53af880e67e002e6c95b46860058fbc94d520ccc2fb4a7469145ca8: CDI devices from CRI Config.CDIDevices: []" Sep 9 00:18:48.027017 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3929321439.mount: Deactivated successfully. Sep 9 00:18:48.049369 containerd[1580]: time="2025-09-09T00:18:48.049308229Z" level=info msg="CreateContainer within sandbox \"0ef4b50a0bbc35ab3a9c3ef292b426b21b02efe17bb083b50b628bbd7ee7b254\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"78357584d53af880e67e002e6c95b46860058fbc94d520ccc2fb4a7469145ca8\"" Sep 9 00:18:48.050946 containerd[1580]: time="2025-09-09T00:18:48.049889370Z" level=info msg="StartContainer for \"78357584d53af880e67e002e6c95b46860058fbc94d520ccc2fb4a7469145ca8\"" Sep 9 00:18:48.050946 containerd[1580]: time="2025-09-09T00:18:48.050360464Z" level=info msg="CreateContainer within sandbox \"0ae61072355550d679c01c5639d877b4d107fb5dc880dfe4e5e1a75cee96a972\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"ec071efbe6ff07ad5ad5999d03c5c2ae626e60b7252b131c6ddbcfaa4a32cf5a\"" Sep 9 00:18:48.050946 containerd[1580]: time="2025-09-09T00:18:48.050698398Z" level=info msg="StartContainer for \"ec071efbe6ff07ad5ad5999d03c5c2ae626e60b7252b131c6ddbcfaa4a32cf5a\"" Sep 9 00:18:48.051715 containerd[1580]: time="2025-09-09T00:18:48.051667076Z" level=info msg="connecting to shim ec071efbe6ff07ad5ad5999d03c5c2ae626e60b7252b131c6ddbcfaa4a32cf5a" address="unix:///run/containerd/s/b39e16468e1625208cee8f9023519d979226820765a796d5ca2d89186c9f1e71" protocol=ttrpc version=3 Sep 9 00:18:48.063971 containerd[1580]: time="2025-09-09T00:18:48.063890353Z" level=info msg="connecting to shim 78357584d53af880e67e002e6c95b46860058fbc94d520ccc2fb4a7469145ca8" 
address="unix:///run/containerd/s/c378d393ee35d951c01c3b6c7973141831d7dc558b81ac754b7913b8561170ed" protocol=ttrpc version=3 Sep 9 00:18:48.080364 systemd[1]: Started cri-containerd-ec071efbe6ff07ad5ad5999d03c5c2ae626e60b7252b131c6ddbcfaa4a32cf5a.scope - libcontainer container ec071efbe6ff07ad5ad5999d03c5c2ae626e60b7252b131c6ddbcfaa4a32cf5a. Sep 9 00:18:48.097227 systemd[1]: Started cri-containerd-78357584d53af880e67e002e6c95b46860058fbc94d520ccc2fb4a7469145ca8.scope - libcontainer container 78357584d53af880e67e002e6c95b46860058fbc94d520ccc2fb4a7469145ca8. Sep 9 00:18:48.134540 containerd[1580]: time="2025-09-09T00:18:48.134482169Z" level=info msg="StartContainer for \"ec071efbe6ff07ad5ad5999d03c5c2ae626e60b7252b131c6ddbcfaa4a32cf5a\" returns successfully" Sep 9 00:18:48.147509 containerd[1580]: time="2025-09-09T00:18:48.147461364Z" level=info msg="StartContainer for \"78357584d53af880e67e002e6c95b46860058fbc94d520ccc2fb4a7469145ca8\" returns successfully" Sep 9 00:18:48.763996 kubelet[2756]: E0909 00:18:48.763886 2756 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:18:48.767078 kubelet[2756]: E0909 00:18:48.766908 2756 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:18:48.775747 kubelet[2756]: I0909 00:18:48.775625 2756 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-9hjlm" podStartSLOduration=29.775606818 podStartE2EDuration="29.775606818s" podCreationTimestamp="2025-09-09 00:18:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-09 00:18:48.775457638 +0000 UTC m=+34.292618915" watchObservedRunningTime="2025-09-09 00:18:48.775606818 +0000 UTC 
m=+34.292768074" Sep 9 00:18:48.800733 kubelet[2756]: I0909 00:18:48.799992 2756 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-66hsx" podStartSLOduration=29.799966788 podStartE2EDuration="29.799966788s" podCreationTimestamp="2025-09-09 00:18:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-09 00:18:48.798684701 +0000 UTC m=+34.315845977" watchObservedRunningTime="2025-09-09 00:18:48.799966788 +0000 UTC m=+34.317128044" Sep 9 00:18:49.768977 kubelet[2756]: E0909 00:18:49.768770 2756 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:18:49.768977 kubelet[2756]: E0909 00:18:49.768905 2756 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:18:50.770574 kubelet[2756]: E0909 00:18:50.770526 2756 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:18:50.771168 kubelet[2756]: E0909 00:18:50.770731 2756 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:18:51.973263 systemd[1]: Started sshd@10-10.0.0.54:22-10.0.0.1:35928.service - OpenSSH per-connection server daemon (10.0.0.1:35928). 
Sep 9 00:18:52.027702 sshd[4097]: Accepted publickey for core from 10.0.0.1 port 35928 ssh2: RSA SHA256:71pIFbyCQBIroIGqc5DeH9snrZBBxVf1ertHDrOSKjM Sep 9 00:18:52.029807 sshd-session[4097]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 00:18:52.035177 systemd-logind[1565]: New session 11 of user core. Sep 9 00:18:52.046220 systemd[1]: Started session-11.scope - Session 11 of User core. Sep 9 00:18:52.175716 sshd[4099]: Connection closed by 10.0.0.1 port 35928 Sep 9 00:18:52.176115 sshd-session[4097]: pam_unix(sshd:session): session closed for user core Sep 9 00:18:52.181219 systemd[1]: sshd@10-10.0.0.54:22-10.0.0.1:35928.service: Deactivated successfully. Sep 9 00:18:52.183936 systemd[1]: session-11.scope: Deactivated successfully. Sep 9 00:18:52.184808 systemd-logind[1565]: Session 11 logged out. Waiting for processes to exit. Sep 9 00:18:52.186118 systemd-logind[1565]: Removed session 11. Sep 9 00:18:57.196271 systemd[1]: Started sshd@11-10.0.0.54:22-10.0.0.1:35936.service - OpenSSH per-connection server daemon (10.0.0.1:35936). Sep 9 00:18:57.255505 sshd[4114]: Accepted publickey for core from 10.0.0.1 port 35936 ssh2: RSA SHA256:71pIFbyCQBIroIGqc5DeH9snrZBBxVf1ertHDrOSKjM Sep 9 00:18:57.257684 sshd-session[4114]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 00:18:57.263433 systemd-logind[1565]: New session 12 of user core. Sep 9 00:18:57.273194 systemd[1]: Started session-12.scope - Session 12 of User core. Sep 9 00:18:57.550499 sshd[4116]: Connection closed by 10.0.0.1 port 35936 Sep 9 00:18:57.550783 sshd-session[4114]: pam_unix(sshd:session): session closed for user core Sep 9 00:18:57.555325 systemd[1]: sshd@11-10.0.0.54:22-10.0.0.1:35936.service: Deactivated successfully. Sep 9 00:18:57.557605 systemd[1]: session-12.scope: Deactivated successfully. Sep 9 00:18:57.558616 systemd-logind[1565]: Session 12 logged out. Waiting for processes to exit. 
Sep 9 00:18:57.560119 systemd-logind[1565]: Removed session 12. Sep 9 00:19:02.573563 systemd[1]: Started sshd@12-10.0.0.54:22-10.0.0.1:39852.service - OpenSSH per-connection server daemon (10.0.0.1:39852). Sep 9 00:19:02.628106 sshd[4131]: Accepted publickey for core from 10.0.0.1 port 39852 ssh2: RSA SHA256:71pIFbyCQBIroIGqc5DeH9snrZBBxVf1ertHDrOSKjM Sep 9 00:19:02.630175 sshd-session[4131]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 00:19:02.634777 systemd-logind[1565]: New session 13 of user core. Sep 9 00:19:02.644200 systemd[1]: Started session-13.scope - Session 13 of User core. Sep 9 00:19:02.767209 sshd[4133]: Connection closed by 10.0.0.1 port 39852 Sep 9 00:19:02.768230 sshd-session[4131]: pam_unix(sshd:session): session closed for user core Sep 9 00:19:02.783711 systemd[1]: sshd@12-10.0.0.54:22-10.0.0.1:39852.service: Deactivated successfully. Sep 9 00:19:02.785985 systemd[1]: session-13.scope: Deactivated successfully. Sep 9 00:19:02.787160 systemd-logind[1565]: Session 13 logged out. Waiting for processes to exit. Sep 9 00:19:02.790378 systemd[1]: Started sshd@13-10.0.0.54:22-10.0.0.1:39856.service - OpenSSH per-connection server daemon (10.0.0.1:39856). Sep 9 00:19:02.791092 systemd-logind[1565]: Removed session 13. Sep 9 00:19:02.850822 sshd[4147]: Accepted publickey for core from 10.0.0.1 port 39856 ssh2: RSA SHA256:71pIFbyCQBIroIGqc5DeH9snrZBBxVf1ertHDrOSKjM Sep 9 00:19:02.852644 sshd-session[4147]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 00:19:02.857583 systemd-logind[1565]: New session 14 of user core. Sep 9 00:19:02.871309 systemd[1]: Started session-14.scope - Session 14 of User core. Sep 9 00:19:03.029237 sshd[4149]: Connection closed by 10.0.0.1 port 39856 Sep 9 00:19:03.030915 sshd-session[4147]: pam_unix(sshd:session): session closed for user core Sep 9 00:19:03.044934 systemd[1]: sshd@13-10.0.0.54:22-10.0.0.1:39856.service: Deactivated successfully. 
Sep 9 00:19:03.048146 systemd[1]: session-14.scope: Deactivated successfully. Sep 9 00:19:03.049649 systemd-logind[1565]: Session 14 logged out. Waiting for processes to exit. Sep 9 00:19:03.054646 systemd[1]: Started sshd@14-10.0.0.54:22-10.0.0.1:39866.service - OpenSSH per-connection server daemon (10.0.0.1:39866). Sep 9 00:19:03.057916 systemd-logind[1565]: Removed session 14. Sep 9 00:19:03.117848 sshd[4161]: Accepted publickey for core from 10.0.0.1 port 39866 ssh2: RSA SHA256:71pIFbyCQBIroIGqc5DeH9snrZBBxVf1ertHDrOSKjM Sep 9 00:19:03.120338 sshd-session[4161]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 00:19:03.126364 systemd-logind[1565]: New session 15 of user core. Sep 9 00:19:03.140258 systemd[1]: Started session-15.scope - Session 15 of User core. Sep 9 00:19:03.266360 sshd[4163]: Connection closed by 10.0.0.1 port 39866 Sep 9 00:19:03.266702 sshd-session[4161]: pam_unix(sshd:session): session closed for user core Sep 9 00:19:03.270742 systemd[1]: sshd@14-10.0.0.54:22-10.0.0.1:39866.service: Deactivated successfully. Sep 9 00:19:03.272873 systemd[1]: session-15.scope: Deactivated successfully. Sep 9 00:19:03.273676 systemd-logind[1565]: Session 15 logged out. Waiting for processes to exit. Sep 9 00:19:03.274998 systemd-logind[1565]: Removed session 15. Sep 9 00:19:08.282908 systemd[1]: Started sshd@15-10.0.0.54:22-10.0.0.1:39870.service - OpenSSH per-connection server daemon (10.0.0.1:39870). Sep 9 00:19:08.334899 sshd[4178]: Accepted publickey for core from 10.0.0.1 port 39870 ssh2: RSA SHA256:71pIFbyCQBIroIGqc5DeH9snrZBBxVf1ertHDrOSKjM Sep 9 00:19:08.336681 sshd-session[4178]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 00:19:08.342269 systemd-logind[1565]: New session 16 of user core. Sep 9 00:19:08.349223 systemd[1]: Started session-16.scope - Session 16 of User core. 
Sep 9 00:19:08.476401 sshd[4180]: Connection closed by 10.0.0.1 port 39870 Sep 9 00:19:08.476800 sshd-session[4178]: pam_unix(sshd:session): session closed for user core Sep 9 00:19:08.483577 systemd[1]: sshd@15-10.0.0.54:22-10.0.0.1:39870.service: Deactivated successfully. Sep 9 00:19:08.486237 systemd[1]: session-16.scope: Deactivated successfully. Sep 9 00:19:08.487435 systemd-logind[1565]: Session 16 logged out. Waiting for processes to exit. Sep 9 00:19:08.488999 systemd-logind[1565]: Removed session 16. Sep 9 00:19:13.489598 systemd[1]: Started sshd@16-10.0.0.54:22-10.0.0.1:40438.service - OpenSSH per-connection server daemon (10.0.0.1:40438). Sep 9 00:19:13.542321 sshd[4193]: Accepted publickey for core from 10.0.0.1 port 40438 ssh2: RSA SHA256:71pIFbyCQBIroIGqc5DeH9snrZBBxVf1ertHDrOSKjM Sep 9 00:19:13.544412 sshd-session[4193]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 00:19:13.549864 systemd-logind[1565]: New session 17 of user core. Sep 9 00:19:13.558226 systemd[1]: Started session-17.scope - Session 17 of User core. Sep 9 00:19:13.684688 sshd[4195]: Connection closed by 10.0.0.1 port 40438 Sep 9 00:19:13.685091 sshd-session[4193]: pam_unix(sshd:session): session closed for user core Sep 9 00:19:13.690559 systemd[1]: sshd@16-10.0.0.54:22-10.0.0.1:40438.service: Deactivated successfully. Sep 9 00:19:13.692948 systemd[1]: session-17.scope: Deactivated successfully. Sep 9 00:19:13.693852 systemd-logind[1565]: Session 17 logged out. Waiting for processes to exit. Sep 9 00:19:13.695320 systemd-logind[1565]: Removed session 17. Sep 9 00:19:18.698102 systemd[1]: Started sshd@17-10.0.0.54:22-10.0.0.1:40444.service - OpenSSH per-connection server daemon (10.0.0.1:40444). 
Sep 9 00:19:18.749363 sshd[4213]: Accepted publickey for core from 10.0.0.1 port 40444 ssh2: RSA SHA256:71pIFbyCQBIroIGqc5DeH9snrZBBxVf1ertHDrOSKjM Sep 9 00:19:18.751028 sshd-session[4213]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 00:19:18.755870 systemd-logind[1565]: New session 18 of user core. Sep 9 00:19:18.762173 systemd[1]: Started session-18.scope - Session 18 of User core. Sep 9 00:19:18.886317 sshd[4215]: Connection closed by 10.0.0.1 port 40444 Sep 9 00:19:18.886727 sshd-session[4213]: pam_unix(sshd:session): session closed for user core Sep 9 00:19:18.898744 systemd[1]: sshd@17-10.0.0.54:22-10.0.0.1:40444.service: Deactivated successfully. Sep 9 00:19:18.901259 systemd[1]: session-18.scope: Deactivated successfully. Sep 9 00:19:18.902659 systemd-logind[1565]: Session 18 logged out. Waiting for processes to exit. Sep 9 00:19:18.905502 systemd-logind[1565]: Removed session 18. Sep 9 00:19:18.907411 systemd[1]: Started sshd@18-10.0.0.54:22-10.0.0.1:40450.service - OpenSSH per-connection server daemon (10.0.0.1:40450). Sep 9 00:19:18.959216 sshd[4228]: Accepted publickey for core from 10.0.0.1 port 40450 ssh2: RSA SHA256:71pIFbyCQBIroIGqc5DeH9snrZBBxVf1ertHDrOSKjM Sep 9 00:19:18.961109 sshd-session[4228]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 00:19:18.966075 systemd-logind[1565]: New session 19 of user core. Sep 9 00:19:18.980359 systemd[1]: Started session-19.scope - Session 19 of User core. Sep 9 00:19:19.603646 sshd[4230]: Connection closed by 10.0.0.1 port 40450 Sep 9 00:19:19.604242 sshd-session[4228]: pam_unix(sshd:session): session closed for user core Sep 9 00:19:19.617056 systemd[1]: sshd@18-10.0.0.54:22-10.0.0.1:40450.service: Deactivated successfully. Sep 9 00:19:19.619606 systemd[1]: session-19.scope: Deactivated successfully. Sep 9 00:19:19.620482 systemd-logind[1565]: Session 19 logged out. Waiting for processes to exit. 
Sep 9 00:19:19.624586 systemd[1]: Started sshd@19-10.0.0.54:22-10.0.0.1:40460.service - OpenSSH per-connection server daemon (10.0.0.1:40460). Sep 9 00:19:19.625328 systemd-logind[1565]: Removed session 19. Sep 9 00:19:19.700937 sshd[4242]: Accepted publickey for core from 10.0.0.1 port 40460 ssh2: RSA SHA256:71pIFbyCQBIroIGqc5DeH9snrZBBxVf1ertHDrOSKjM Sep 9 00:19:19.702915 sshd-session[4242]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 00:19:19.708430 systemd-logind[1565]: New session 20 of user core. Sep 9 00:19:19.723356 systemd[1]: Started session-20.scope - Session 20 of User core. Sep 9 00:19:21.358879 sshd[4244]: Connection closed by 10.0.0.1 port 40460 Sep 9 00:19:21.359344 sshd-session[4242]: pam_unix(sshd:session): session closed for user core Sep 9 00:19:21.372565 systemd[1]: sshd@19-10.0.0.54:22-10.0.0.1:40460.service: Deactivated successfully. Sep 9 00:19:21.374834 systemd[1]: session-20.scope: Deactivated successfully. Sep 9 00:19:21.375674 systemd-logind[1565]: Session 20 logged out. Waiting for processes to exit. Sep 9 00:19:21.379644 systemd[1]: Started sshd@20-10.0.0.54:22-10.0.0.1:55512.service - OpenSSH per-connection server daemon (10.0.0.1:55512). Sep 9 00:19:21.380710 systemd-logind[1565]: Removed session 20. Sep 9 00:19:21.441628 sshd[4266]: Accepted publickey for core from 10.0.0.1 port 55512 ssh2: RSA SHA256:71pIFbyCQBIroIGqc5DeH9snrZBBxVf1ertHDrOSKjM Sep 9 00:19:21.443848 sshd-session[4266]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 00:19:21.449143 systemd-logind[1565]: New session 21 of user core. Sep 9 00:19:21.459353 systemd[1]: Started session-21.scope - Session 21 of User core. Sep 9 00:19:21.687900 sshd[4268]: Connection closed by 10.0.0.1 port 55512 Sep 9 00:19:21.688322 sshd-session[4266]: pam_unix(sshd:session): session closed for user core Sep 9 00:19:21.700654 systemd[1]: sshd@20-10.0.0.54:22-10.0.0.1:55512.service: Deactivated successfully. 
Sep 9 00:19:21.703518 systemd[1]: session-21.scope: Deactivated successfully. Sep 9 00:19:21.704863 systemd-logind[1565]: Session 21 logged out. Waiting for processes to exit. Sep 9 00:19:21.709771 systemd[1]: Started sshd@21-10.0.0.54:22-10.0.0.1:55514.service - OpenSSH per-connection server daemon (10.0.0.1:55514). Sep 9 00:19:21.710703 systemd-logind[1565]: Removed session 21. Sep 9 00:19:21.767124 sshd[4280]: Accepted publickey for core from 10.0.0.1 port 55514 ssh2: RSA SHA256:71pIFbyCQBIroIGqc5DeH9snrZBBxVf1ertHDrOSKjM Sep 9 00:19:21.768597 sshd-session[4280]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 00:19:21.774471 systemd-logind[1565]: New session 22 of user core. Sep 9 00:19:21.788349 systemd[1]: Started session-22.scope - Session 22 of User core. Sep 9 00:19:21.994103 sshd[4282]: Connection closed by 10.0.0.1 port 55514 Sep 9 00:19:21.994478 sshd-session[4280]: pam_unix(sshd:session): session closed for user core Sep 9 00:19:21.998530 systemd[1]: sshd@21-10.0.0.54:22-10.0.0.1:55514.service: Deactivated successfully. Sep 9 00:19:22.000891 systemd[1]: session-22.scope: Deactivated successfully. Sep 9 00:19:22.002385 systemd-logind[1565]: Session 22 logged out. Waiting for processes to exit. Sep 9 00:19:22.004172 systemd-logind[1565]: Removed session 22. Sep 9 00:19:27.007919 systemd[1]: Started sshd@22-10.0.0.54:22-10.0.0.1:55518.service - OpenSSH per-connection server daemon (10.0.0.1:55518). Sep 9 00:19:27.060698 sshd[4296]: Accepted publickey for core from 10.0.0.1 port 55518 ssh2: RSA SHA256:71pIFbyCQBIroIGqc5DeH9snrZBBxVf1ertHDrOSKjM Sep 9 00:19:27.062357 sshd-session[4296]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 00:19:27.066618 systemd-logind[1565]: New session 23 of user core. Sep 9 00:19:27.075170 systemd[1]: Started session-23.scope - Session 23 of User core. 
Sep 9 00:19:27.189568 sshd[4298]: Connection closed by 10.0.0.1 port 55518 Sep 9 00:19:27.189929 sshd-session[4296]: pam_unix(sshd:session): session closed for user core Sep 9 00:19:27.194507 systemd[1]: sshd@22-10.0.0.54:22-10.0.0.1:55518.service: Deactivated successfully. Sep 9 00:19:27.196695 systemd[1]: session-23.scope: Deactivated successfully. Sep 9 00:19:27.197672 systemd-logind[1565]: Session 23 logged out. Waiting for processes to exit. Sep 9 00:19:27.199125 systemd-logind[1565]: Removed session 23. Sep 9 00:19:31.457690 update_engine[1569]: I20250909 00:19:31.457561 1569 prefs.cc:52] certificate-report-to-send-update not present in /var/lib/update_engine/prefs Sep 9 00:19:31.457690 update_engine[1569]: I20250909 00:19:31.457641 1569 prefs.cc:52] certificate-report-to-send-download not present in /var/lib/update_engine/prefs Sep 9 00:19:31.458398 update_engine[1569]: I20250909 00:19:31.457958 1569 prefs.cc:52] aleph-version not present in /var/lib/update_engine/prefs Sep 9 00:19:31.458601 update_engine[1569]: I20250909 00:19:31.458561 1569 omaha_request_params.cc:62] Current group set to beta Sep 9 00:19:31.459660 update_engine[1569]: I20250909 00:19:31.459450 1569 update_attempter.cc:499] Already updated boot flags. Skipping. Sep 9 00:19:31.459660 update_engine[1569]: I20250909 00:19:31.459470 1569 update_attempter.cc:643] Scheduling an action processor start. 
Sep 9 00:19:31.459660 update_engine[1569]: I20250909 00:19:31.459493 1569 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction Sep 9 00:19:31.462663 locksmithd[1611]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_CHECKING_FOR_UPDATE" NewVersion=0.0.0 NewSize=0 Sep 9 00:19:31.462951 update_engine[1569]: I20250909 00:19:31.462519 1569 prefs.cc:52] previous-version not present in /var/lib/update_engine/prefs Sep 9 00:19:31.462951 update_engine[1569]: I20250909 00:19:31.462661 1569 omaha_request_action.cc:271] Posting an Omaha request to disabled Sep 9 00:19:31.462951 update_engine[1569]: I20250909 00:19:31.462669 1569 omaha_request_action.cc:272] Request: Sep 9 00:19:31.462951 update_engine[1569]: Sep 9 00:19:31.462951 update_engine[1569]: Sep 9 00:19:31.462951 update_engine[1569]: Sep 9 00:19:31.462951 update_engine[1569]: Sep 9 00:19:31.462951 update_engine[1569]: Sep 9 00:19:31.462951 update_engine[1569]: Sep 9 00:19:31.462951 update_engine[1569]: Sep 9 00:19:31.462951 update_engine[1569]: Sep 9 00:19:31.462951 update_engine[1569]: I20250909 00:19:31.462677 1569 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Sep 9 00:19:31.466442 update_engine[1569]: I20250909 00:19:31.466404 1569 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Sep 9 00:19:31.466850 update_engine[1569]: I20250909 00:19:31.466809 1569 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Sep 9 00:19:31.475476 update_engine[1569]: E20250909 00:19:31.475378 1569 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Sep 9 00:19:31.475604 update_engine[1569]: I20250909 00:19:31.475512 1569 libcurl_http_fetcher.cc:283] No HTTP response, retry 1 Sep 9 00:19:32.208400 systemd[1]: Started sshd@23-10.0.0.54:22-10.0.0.1:54998.service - OpenSSH per-connection server daemon (10.0.0.1:54998). 
Sep 9 00:19:32.268718 sshd[4314]: Accepted publickey for core from 10.0.0.1 port 54998 ssh2: RSA SHA256:71pIFbyCQBIroIGqc5DeH9snrZBBxVf1ertHDrOSKjM Sep 9 00:19:32.271212 sshd-session[4314]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 00:19:32.277373 systemd-logind[1565]: New session 24 of user core. Sep 9 00:19:32.289419 systemd[1]: Started session-24.scope - Session 24 of User core. Sep 9 00:19:32.405328 sshd[4316]: Connection closed by 10.0.0.1 port 54998 Sep 9 00:19:32.405706 sshd-session[4314]: pam_unix(sshd:session): session closed for user core Sep 9 00:19:32.410460 systemd[1]: sshd@23-10.0.0.54:22-10.0.0.1:54998.service: Deactivated successfully. Sep 9 00:19:32.413362 systemd[1]: session-24.scope: Deactivated successfully. Sep 9 00:19:32.414430 systemd-logind[1565]: Session 24 logged out. Waiting for processes to exit. Sep 9 00:19:32.416476 systemd-logind[1565]: Removed session 24. Sep 9 00:19:33.620582 kubelet[2756]: E0909 00:19:33.620522 2756 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:19:35.619725 kubelet[2756]: E0909 00:19:35.619620 2756 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:19:35.620349 kubelet[2756]: E0909 00:19:35.619960 2756 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:19:37.419950 systemd[1]: Started sshd@24-10.0.0.54:22-10.0.0.1:55014.service - OpenSSH per-connection server daemon (10.0.0.1:55014). 
Sep 9 00:19:37.485152 sshd[4330]: Accepted publickey for core from 10.0.0.1 port 55014 ssh2: RSA SHA256:71pIFbyCQBIroIGqc5DeH9snrZBBxVf1ertHDrOSKjM Sep 9 00:19:37.486930 sshd-session[4330]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 00:19:37.491637 systemd-logind[1565]: New session 25 of user core. Sep 9 00:19:37.503289 systemd[1]: Started session-25.scope - Session 25 of User core. Sep 9 00:19:37.620194 sshd[4332]: Connection closed by 10.0.0.1 port 55014 Sep 9 00:19:37.620690 sshd-session[4330]: pam_unix(sshd:session): session closed for user core Sep 9 00:19:37.627000 systemd[1]: sshd@24-10.0.0.54:22-10.0.0.1:55014.service: Deactivated successfully. Sep 9 00:19:37.629965 systemd[1]: session-25.scope: Deactivated successfully. Sep 9 00:19:37.631156 systemd-logind[1565]: Session 25 logged out. Waiting for processes to exit. Sep 9 00:19:37.633434 systemd-logind[1565]: Removed session 25. Sep 9 00:19:41.457076 update_engine[1569]: I20250909 00:19:41.456934 1569 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Sep 9 00:19:41.457725 update_engine[1569]: I20250909 00:19:41.457278 1569 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Sep 9 00:19:41.457725 update_engine[1569]: I20250909 00:19:41.457571 1569 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Sep 9 00:19:41.465106 update_engine[1569]: E20250909 00:19:41.465003 1569 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Sep 9 00:19:41.465106 update_engine[1569]: I20250909 00:19:41.465075 1569 libcurl_http_fetcher.cc:283] No HTTP response, retry 2 Sep 9 00:19:42.634460 systemd[1]: Started sshd@25-10.0.0.54:22-10.0.0.1:42542.service - OpenSSH per-connection server daemon (10.0.0.1:42542). 
Sep 9 00:19:42.691939 sshd[4346]: Accepted publickey for core from 10.0.0.1 port 42542 ssh2: RSA SHA256:71pIFbyCQBIroIGqc5DeH9snrZBBxVf1ertHDrOSKjM Sep 9 00:19:42.693801 sshd-session[4346]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 00:19:42.698926 systemd-logind[1565]: New session 26 of user core. Sep 9 00:19:42.706252 systemd[1]: Started session-26.scope - Session 26 of User core. Sep 9 00:19:42.818899 sshd[4348]: Connection closed by 10.0.0.1 port 42542 Sep 9 00:19:42.819368 sshd-session[4346]: pam_unix(sshd:session): session closed for user core Sep 9 00:19:42.834253 systemd[1]: sshd@25-10.0.0.54:22-10.0.0.1:42542.service: Deactivated successfully. Sep 9 00:19:42.836581 systemd[1]: session-26.scope: Deactivated successfully. Sep 9 00:19:42.837863 systemd-logind[1565]: Session 26 logged out. Waiting for processes to exit. Sep 9 00:19:42.841560 systemd[1]: Started sshd@26-10.0.0.54:22-10.0.0.1:42554.service - OpenSSH per-connection server daemon (10.0.0.1:42554). Sep 9 00:19:42.842677 systemd-logind[1565]: Removed session 26. Sep 9 00:19:42.888736 sshd[4362]: Accepted publickey for core from 10.0.0.1 port 42554 ssh2: RSA SHA256:71pIFbyCQBIroIGqc5DeH9snrZBBxVf1ertHDrOSKjM Sep 9 00:19:42.890461 sshd-session[4362]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 00:19:42.895283 systemd-logind[1565]: New session 27 of user core. Sep 9 00:19:42.905291 systemd[1]: Started session-27.scope - Session 27 of User core. 
Sep 9 00:19:44.273105 containerd[1580]: time="2025-09-09T00:19:44.272021475Z" level=info msg="StopContainer for \"5861178676eca7f29ec27f69a3d684b602f0401f81fbf60ae62a87f5535ea920\" with timeout 30 (s)" Sep 9 00:19:44.281974 containerd[1580]: time="2025-09-09T00:19:44.281836879Z" level=info msg="Stop container \"5861178676eca7f29ec27f69a3d684b602f0401f81fbf60ae62a87f5535ea920\" with signal terminated" Sep 9 00:19:44.304109 systemd[1]: cri-containerd-5861178676eca7f29ec27f69a3d684b602f0401f81fbf60ae62a87f5535ea920.scope: Deactivated successfully. Sep 9 00:19:44.306907 containerd[1580]: time="2025-09-09T00:19:44.306834510Z" level=info msg="received exit event container_id:\"5861178676eca7f29ec27f69a3d684b602f0401f81fbf60ae62a87f5535ea920\" id:\"5861178676eca7f29ec27f69a3d684b602f0401f81fbf60ae62a87f5535ea920\" pid:3291 exited_at:{seconds:1757377184 nanos:306334041}" Sep 9 00:19:44.307224 containerd[1580]: time="2025-09-09T00:19:44.307142704Z" level=info msg="TaskExit event in podsandbox handler container_id:\"5861178676eca7f29ec27f69a3d684b602f0401f81fbf60ae62a87f5535ea920\" id:\"5861178676eca7f29ec27f69a3d684b602f0401f81fbf60ae62a87f5535ea920\" pid:3291 exited_at:{seconds:1757377184 nanos:306334041}" Sep 9 00:19:44.333476 containerd[1580]: time="2025-09-09T00:19:44.333382230Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Sep 9 00:19:44.336369 containerd[1580]: time="2025-09-09T00:19:44.336310390Z" level=info msg="TaskExit event in podsandbox handler container_id:\"9bb876e2646422d8fa9a5529dd49114c54831802ebf672740665c14af8b2f436\" id:\"db76dcba202350f73e2003e893a6ccb4dbd5b45fea291227b0dce2c89403a383\" pid:4391 exited_at:{seconds:1757377184 nanos:335659086}" Sep 9 00:19:44.342755 containerd[1580]: time="2025-09-09T00:19:44.342582568Z" level=info 
msg="StopContainer for \"9bb876e2646422d8fa9a5529dd49114c54831802ebf672740665c14af8b2f436\" with timeout 2 (s)" Sep 9 00:19:44.343335 containerd[1580]: time="2025-09-09T00:19:44.343307563Z" level=info msg="Stop container \"9bb876e2646422d8fa9a5529dd49114c54831802ebf672740665c14af8b2f436\" with signal terminated" Sep 9 00:19:44.348849 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-5861178676eca7f29ec27f69a3d684b602f0401f81fbf60ae62a87f5535ea920-rootfs.mount: Deactivated successfully. Sep 9 00:19:44.356247 systemd-networkd[1439]: lxc_health: Link DOWN Sep 9 00:19:44.356259 systemd-networkd[1439]: lxc_health: Lost carrier Sep 9 00:19:44.370842 containerd[1580]: time="2025-09-09T00:19:44.370788200Z" level=info msg="StopContainer for \"5861178676eca7f29ec27f69a3d684b602f0401f81fbf60ae62a87f5535ea920\" returns successfully" Sep 9 00:19:44.373932 containerd[1580]: time="2025-09-09T00:19:44.373857588Z" level=info msg="StopPodSandbox for \"0323bdab5332ef735c58703eb4fdf4ed309e30875da12633541762bc2a5e25c5\"" Sep 9 00:19:44.374092 containerd[1580]: time="2025-09-09T00:19:44.373971374Z" level=info msg="Container to stop \"5861178676eca7f29ec27f69a3d684b602f0401f81fbf60ae62a87f5535ea920\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 9 00:19:44.385389 systemd[1]: cri-containerd-0323bdab5332ef735c58703eb4fdf4ed309e30875da12633541762bc2a5e25c5.scope: Deactivated successfully. Sep 9 00:19:44.387478 systemd[1]: cri-containerd-9bb876e2646422d8fa9a5529dd49114c54831802ebf672740665c14af8b2f436.scope: Deactivated successfully. Sep 9 00:19:44.387924 systemd[1]: cri-containerd-9bb876e2646422d8fa9a5529dd49114c54831802ebf672740665c14af8b2f436.scope: Consumed 7.784s CPU time, 126M memory peak, 552K read from disk, 14.8M written to disk. 
Sep 9 00:19:44.389143 containerd[1580]: time="2025-09-09T00:19:44.389074400Z" level=info msg="received exit event container_id:\"9bb876e2646422d8fa9a5529dd49114c54831802ebf672740665c14af8b2f436\" id:\"9bb876e2646422d8fa9a5529dd49114c54831802ebf672740665c14af8b2f436\" pid:3400 exited_at:{seconds:1757377184 nanos:388799299}" Sep 9 00:19:44.389244 containerd[1580]: time="2025-09-09T00:19:44.389082966Z" level=info msg="TaskExit event in podsandbox handler container_id:\"9bb876e2646422d8fa9a5529dd49114c54831802ebf672740665c14af8b2f436\" id:\"9bb876e2646422d8fa9a5529dd49114c54831802ebf672740665c14af8b2f436\" pid:3400 exited_at:{seconds:1757377184 nanos:388799299}" Sep 9 00:19:44.389289 containerd[1580]: time="2025-09-09T00:19:44.389256045Z" level=info msg="TaskExit event in podsandbox handler container_id:\"0323bdab5332ef735c58703eb4fdf4ed309e30875da12633541762bc2a5e25c5\" id:\"0323bdab5332ef735c58703eb4fdf4ed309e30875da12633541762bc2a5e25c5\" pid:2965 exit_status:137 exited_at:{seconds:1757377184 nanos:388679963}" Sep 9 00:19:44.420208 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-9bb876e2646422d8fa9a5529dd49114c54831802ebf672740665c14af8b2f436-rootfs.mount: Deactivated successfully. Sep 9 00:19:44.432085 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-0323bdab5332ef735c58703eb4fdf4ed309e30875da12633541762bc2a5e25c5-rootfs.mount: Deactivated successfully. 
Sep 9 00:19:44.437216 containerd[1580]: time="2025-09-09T00:19:44.437153441Z" level=info msg="shim disconnected" id=0323bdab5332ef735c58703eb4fdf4ed309e30875da12633541762bc2a5e25c5 namespace=k8s.io Sep 9 00:19:44.437216 containerd[1580]: time="2025-09-09T00:19:44.437205009Z" level=warning msg="cleaning up after shim disconnected" id=0323bdab5332ef735c58703eb4fdf4ed309e30875da12633541762bc2a5e25c5 namespace=k8s.io Sep 9 00:19:44.468090 containerd[1580]: time="2025-09-09T00:19:44.437217112Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 9 00:19:44.471422 containerd[1580]: time="2025-09-09T00:19:44.471319010Z" level=info msg="StopContainer for \"9bb876e2646422d8fa9a5529dd49114c54831802ebf672740665c14af8b2f436\" returns successfully" Sep 9 00:19:44.472235 containerd[1580]: time="2025-09-09T00:19:44.472186975Z" level=info msg="StopPodSandbox for \"84fecb115a0d8c220944e7fb3d95fe609b88427815b65555c95de8fbbebf3bac\"" Sep 9 00:19:44.472469 containerd[1580]: time="2025-09-09T00:19:44.472435586Z" level=info msg="Container to stop \"4047b86f4af245158d39dc05e1ff6da43a2d8ef296355ce8f4cea260372fe6da\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 9 00:19:44.472712 containerd[1580]: time="2025-09-09T00:19:44.472571363Z" level=info msg="Container to stop \"9bb876e2646422d8fa9a5529dd49114c54831802ebf672740665c14af8b2f436\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 9 00:19:44.472775 containerd[1580]: time="2025-09-09T00:19:44.472717680Z" level=info msg="Container to stop \"90386877b7b6f2304500552f8e0073c2f065bbd23dcd7063f4aacb275de7c798\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 9 00:19:44.472775 containerd[1580]: time="2025-09-09T00:19:44.472742738Z" level=info msg="Container to stop \"30b5d1308ccbd31ec58822b9045532a246f339f467c623d6d3686a9c38184411\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 9 00:19:44.472775 containerd[1580]: 
time="2025-09-09T00:19:44.472758317Z" level=info msg="Container to stop \"b725f4c3fd37cabce427ace194eff11edb9264e23c3636a9cff6e8a71889bc08\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 9 00:19:44.480848 systemd[1]: cri-containerd-84fecb115a0d8c220944e7fb3d95fe609b88427815b65555c95de8fbbebf3bac.scope: Deactivated successfully. Sep 9 00:19:44.508906 containerd[1580]: time="2025-09-09T00:19:44.508853073Z" level=info msg="TaskExit event in podsandbox handler container_id:\"84fecb115a0d8c220944e7fb3d95fe609b88427815b65555c95de8fbbebf3bac\" id:\"84fecb115a0d8c220944e7fb3d95fe609b88427815b65555c95de8fbbebf3bac\" pid:2909 exit_status:137 exited_at:{seconds:1757377184 nanos:483481392}" Sep 9 00:19:44.513948 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-0323bdab5332ef735c58703eb4fdf4ed309e30875da12633541762bc2a5e25c5-shm.mount: Deactivated successfully. Sep 9 00:19:44.514745 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-84fecb115a0d8c220944e7fb3d95fe609b88427815b65555c95de8fbbebf3bac-rootfs.mount: Deactivated successfully. 
Sep 9 00:19:44.521420 containerd[1580]: time="2025-09-09T00:19:44.521353596Z" level=info msg="received exit event sandbox_id:\"0323bdab5332ef735c58703eb4fdf4ed309e30875da12633541762bc2a5e25c5\" exit_status:137 exited_at:{seconds:1757377184 nanos:388679963}" Sep 9 00:19:44.529389 containerd[1580]: time="2025-09-09T00:19:44.529244744Z" level=info msg="received exit event sandbox_id:\"84fecb115a0d8c220944e7fb3d95fe609b88427815b65555c95de8fbbebf3bac\" exit_status:137 exited_at:{seconds:1757377184 nanos:483481392}" Sep 9 00:19:44.530139 containerd[1580]: time="2025-09-09T00:19:44.530098252Z" level=info msg="TearDown network for sandbox \"84fecb115a0d8c220944e7fb3d95fe609b88427815b65555c95de8fbbebf3bac\" successfully" Sep 9 00:19:44.530216 containerd[1580]: time="2025-09-09T00:19:44.530137817Z" level=info msg="StopPodSandbox for \"84fecb115a0d8c220944e7fb3d95fe609b88427815b65555c95de8fbbebf3bac\" returns successfully" Sep 9 00:19:44.532331 containerd[1580]: time="2025-09-09T00:19:44.532277061Z" level=info msg="shim disconnected" id=84fecb115a0d8c220944e7fb3d95fe609b88427815b65555c95de8fbbebf3bac namespace=k8s.io Sep 9 00:19:44.532331 containerd[1580]: time="2025-09-09T00:19:44.532314863Z" level=warning msg="cleaning up after shim disconnected" id=84fecb115a0d8c220944e7fb3d95fe609b88427815b65555c95de8fbbebf3bac namespace=k8s.io Sep 9 00:19:44.532441 containerd[1580]: time="2025-09-09T00:19:44.532327156Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 9 00:19:44.533056 containerd[1580]: time="2025-09-09T00:19:44.532969504Z" level=info msg="TearDown network for sandbox \"0323bdab5332ef735c58703eb4fdf4ed309e30875da12633541762bc2a5e25c5\" successfully" Sep 9 00:19:44.533121 containerd[1580]: time="2025-09-09T00:19:44.533058121Z" level=info msg="StopPodSandbox for \"0323bdab5332ef735c58703eb4fdf4ed309e30875da12633541762bc2a5e25c5\" returns successfully" Sep 9 00:19:44.602124 kubelet[2756]: I0909 00:19:44.601987 2756 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/62d16909-5b45-464b-ab31-6c23beca80d3-cni-path\") pod \"62d16909-5b45-464b-ab31-6c23beca80d3\" (UID: \"62d16909-5b45-464b-ab31-6c23beca80d3\") " Sep 9 00:19:44.602124 kubelet[2756]: I0909 00:19:44.602094 2756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/62d16909-5b45-464b-ab31-6c23beca80d3-host-proc-sys-net\") pod \"62d16909-5b45-464b-ab31-6c23beca80d3\" (UID: \"62d16909-5b45-464b-ab31-6c23beca80d3\") " Sep 9 00:19:44.602124 kubelet[2756]: I0909 00:19:44.602122 2756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-trgfh\" (UniqueName: \"kubernetes.io/projected/62d16909-5b45-464b-ab31-6c23beca80d3-kube-api-access-trgfh\") pod \"62d16909-5b45-464b-ab31-6c23beca80d3\" (UID: \"62d16909-5b45-464b-ab31-6c23beca80d3\") " Sep 9 00:19:44.602124 kubelet[2756]: I0909 00:19:44.602141 2756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/f40fe4b1-c833-48b1-a4bf-c80e09aef469-cilium-config-path\") pod \"f40fe4b1-c833-48b1-a4bf-c80e09aef469\" (UID: \"f40fe4b1-c833-48b1-a4bf-c80e09aef469\") " Sep 9 00:19:44.602124 kubelet[2756]: I0909 00:19:44.602157 2756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/62d16909-5b45-464b-ab31-6c23beca80d3-cilium-cgroup\") pod \"62d16909-5b45-464b-ab31-6c23beca80d3\" (UID: \"62d16909-5b45-464b-ab31-6c23beca80d3\") " Sep 9 00:19:44.602926 kubelet[2756]: I0909 00:19:44.602180 2756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/62d16909-5b45-464b-ab31-6c23beca80d3-lib-modules\") pod \"62d16909-5b45-464b-ab31-6c23beca80d3\" (UID: 
\"62d16909-5b45-464b-ab31-6c23beca80d3\") " Sep 9 00:19:44.602926 kubelet[2756]: I0909 00:19:44.602193 2756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/62d16909-5b45-464b-ab31-6c23beca80d3-cilium-run\") pod \"62d16909-5b45-464b-ab31-6c23beca80d3\" (UID: \"62d16909-5b45-464b-ab31-6c23beca80d3\") " Sep 9 00:19:44.602926 kubelet[2756]: I0909 00:19:44.602208 2756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/62d16909-5b45-464b-ab31-6c23beca80d3-bpf-maps\") pod \"62d16909-5b45-464b-ab31-6c23beca80d3\" (UID: \"62d16909-5b45-464b-ab31-6c23beca80d3\") " Sep 9 00:19:44.602926 kubelet[2756]: I0909 00:19:44.602221 2756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/62d16909-5b45-464b-ab31-6c23beca80d3-host-proc-sys-kernel\") pod \"62d16909-5b45-464b-ab31-6c23beca80d3\" (UID: \"62d16909-5b45-464b-ab31-6c23beca80d3\") " Sep 9 00:19:44.602926 kubelet[2756]: I0909 00:19:44.602204 2756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/62d16909-5b45-464b-ab31-6c23beca80d3-cni-path" (OuterVolumeSpecName: "cni-path") pod "62d16909-5b45-464b-ab31-6c23beca80d3" (UID: "62d16909-5b45-464b-ab31-6c23beca80d3"). InnerVolumeSpecName "cni-path". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 9 00:19:44.602926 kubelet[2756]: I0909 00:19:44.602235 2756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/62d16909-5b45-464b-ab31-6c23beca80d3-cilium-config-path\") pod \"62d16909-5b45-464b-ab31-6c23beca80d3\" (UID: \"62d16909-5b45-464b-ab31-6c23beca80d3\") " Sep 9 00:19:44.603251 kubelet[2756]: I0909 00:19:44.602338 2756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h8w88\" (UniqueName: \"kubernetes.io/projected/f40fe4b1-c833-48b1-a4bf-c80e09aef469-kube-api-access-h8w88\") pod \"f40fe4b1-c833-48b1-a4bf-c80e09aef469\" (UID: \"f40fe4b1-c833-48b1-a4bf-c80e09aef469\") " Sep 9 00:19:44.603251 kubelet[2756]: I0909 00:19:44.602368 2756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/62d16909-5b45-464b-ab31-6c23beca80d3-hostproc\") pod \"62d16909-5b45-464b-ab31-6c23beca80d3\" (UID: \"62d16909-5b45-464b-ab31-6c23beca80d3\") " Sep 9 00:19:44.603251 kubelet[2756]: I0909 00:19:44.602388 2756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/62d16909-5b45-464b-ab31-6c23beca80d3-xtables-lock\") pod \"62d16909-5b45-464b-ab31-6c23beca80d3\" (UID: \"62d16909-5b45-464b-ab31-6c23beca80d3\") " Sep 9 00:19:44.603251 kubelet[2756]: I0909 00:19:44.602408 2756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/62d16909-5b45-464b-ab31-6c23beca80d3-etc-cni-netd\") pod \"62d16909-5b45-464b-ab31-6c23beca80d3\" (UID: \"62d16909-5b45-464b-ab31-6c23beca80d3\") " Sep 9 00:19:44.603251 kubelet[2756]: I0909 00:19:44.602429 2756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: 
\"kubernetes.io/secret/62d16909-5b45-464b-ab31-6c23beca80d3-clustermesh-secrets\") pod \"62d16909-5b45-464b-ab31-6c23beca80d3\" (UID: \"62d16909-5b45-464b-ab31-6c23beca80d3\") " Sep 9 00:19:44.603251 kubelet[2756]: I0909 00:19:44.602451 2756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/62d16909-5b45-464b-ab31-6c23beca80d3-hubble-tls\") pod \"62d16909-5b45-464b-ab31-6c23beca80d3\" (UID: \"62d16909-5b45-464b-ab31-6c23beca80d3\") " Sep 9 00:19:44.603483 kubelet[2756]: I0909 00:19:44.602502 2756 reconciler_common.go:293] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/62d16909-5b45-464b-ab31-6c23beca80d3-cni-path\") on node \"localhost\" DevicePath \"\"" Sep 9 00:19:44.606017 kubelet[2756]: I0909 00:19:44.605949 2756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/62d16909-5b45-464b-ab31-6c23beca80d3-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "62d16909-5b45-464b-ab31-6c23beca80d3" (UID: "62d16909-5b45-464b-ab31-6c23beca80d3"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Sep 9 00:19:44.606458 kubelet[2756]: I0909 00:19:44.606432 2756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/62d16909-5b45-464b-ab31-6c23beca80d3-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "62d16909-5b45-464b-ab31-6c23beca80d3" (UID: "62d16909-5b45-464b-ab31-6c23beca80d3"). InnerVolumeSpecName "cilium-cgroup". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 9 00:19:44.606563 kubelet[2756]: I0909 00:19:44.606546 2756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/62d16909-5b45-464b-ab31-6c23beca80d3-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "62d16909-5b45-464b-ab31-6c23beca80d3" (UID: "62d16909-5b45-464b-ab31-6c23beca80d3"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 9 00:19:44.606649 kubelet[2756]: I0909 00:19:44.606629 2756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/62d16909-5b45-464b-ab31-6c23beca80d3-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "62d16909-5b45-464b-ab31-6c23beca80d3" (UID: "62d16909-5b45-464b-ab31-6c23beca80d3"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 9 00:19:44.606733 kubelet[2756]: I0909 00:19:44.606720 2756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/62d16909-5b45-464b-ab31-6c23beca80d3-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "62d16909-5b45-464b-ab31-6c23beca80d3" (UID: "62d16909-5b45-464b-ab31-6c23beca80d3"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 9 00:19:44.606819 kubelet[2756]: I0909 00:19:44.606803 2756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/62d16909-5b45-464b-ab31-6c23beca80d3-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "62d16909-5b45-464b-ab31-6c23beca80d3" (UID: "62d16909-5b45-464b-ab31-6c23beca80d3"). InnerVolumeSpecName "host-proc-sys-kernel". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 9 00:19:44.606916 kubelet[2756]: I0909 00:19:44.606902 2756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/62d16909-5b45-464b-ab31-6c23beca80d3-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "62d16909-5b45-464b-ab31-6c23beca80d3" (UID: "62d16909-5b45-464b-ab31-6c23beca80d3"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 9 00:19:44.607202 kubelet[2756]: I0909 00:19:44.607150 2756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/62d16909-5b45-464b-ab31-6c23beca80d3-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "62d16909-5b45-464b-ab31-6c23beca80d3" (UID: "62d16909-5b45-464b-ab31-6c23beca80d3"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 9 00:19:44.607652 kubelet[2756]: I0909 00:19:44.607329 2756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/62d16909-5b45-464b-ab31-6c23beca80d3-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "62d16909-5b45-464b-ab31-6c23beca80d3" (UID: "62d16909-5b45-464b-ab31-6c23beca80d3"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 9 00:19:44.607652 kubelet[2756]: I0909 00:19:44.607293 2756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/62d16909-5b45-464b-ab31-6c23beca80d3-kube-api-access-trgfh" (OuterVolumeSpecName: "kube-api-access-trgfh") pod "62d16909-5b45-464b-ab31-6c23beca80d3" (UID: "62d16909-5b45-464b-ab31-6c23beca80d3"). InnerVolumeSpecName "kube-api-access-trgfh". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 9 00:19:44.607652 kubelet[2756]: I0909 00:19:44.607379 2756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/62d16909-5b45-464b-ab31-6c23beca80d3-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "62d16909-5b45-464b-ab31-6c23beca80d3" (UID: "62d16909-5b45-464b-ab31-6c23beca80d3"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 9 00:19:44.607652 kubelet[2756]: I0909 00:19:44.607405 2756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/62d16909-5b45-464b-ab31-6c23beca80d3-hostproc" (OuterVolumeSpecName: "hostproc") pod "62d16909-5b45-464b-ab31-6c23beca80d3" (UID: "62d16909-5b45-464b-ab31-6c23beca80d3"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 9 00:19:44.607652 kubelet[2756]: I0909 00:19:44.607429 2756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f40fe4b1-c833-48b1-a4bf-c80e09aef469-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "f40fe4b1-c833-48b1-a4bf-c80e09aef469" (UID: "f40fe4b1-c833-48b1-a4bf-c80e09aef469"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Sep 9 00:19:44.610885 kubelet[2756]: I0909 00:19:44.610825 2756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/62d16909-5b45-464b-ab31-6c23beca80d3-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "62d16909-5b45-464b-ab31-6c23beca80d3" (UID: "62d16909-5b45-464b-ab31-6c23beca80d3"). InnerVolumeSpecName "clustermesh-secrets". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 9 00:19:44.610990 kubelet[2756]: I0909 00:19:44.610906 2756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f40fe4b1-c833-48b1-a4bf-c80e09aef469-kube-api-access-h8w88" (OuterVolumeSpecName: "kube-api-access-h8w88") pod "f40fe4b1-c833-48b1-a4bf-c80e09aef469" (UID: "f40fe4b1-c833-48b1-a4bf-c80e09aef469"). InnerVolumeSpecName "kube-api-access-h8w88". PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 9 00:19:44.631299 systemd[1]: Removed slice kubepods-burstable-pod62d16909_5b45_464b_ab31_6c23beca80d3.slice - libcontainer container kubepods-burstable-pod62d16909_5b45_464b_ab31_6c23beca80d3.slice. Sep 9 00:19:44.631505 systemd[1]: kubepods-burstable-pod62d16909_5b45_464b_ab31_6c23beca80d3.slice: Consumed 7.913s CPU time, 126.4M memory peak, 564K read from disk, 14.8M written to disk. Sep 9 00:19:44.635545 systemd[1]: Removed slice kubepods-besteffort-podf40fe4b1_c833_48b1_a4bf_c80e09aef469.slice - libcontainer container kubepods-besteffort-podf40fe4b1_c833_48b1_a4bf_c80e09aef469.slice. 
Sep 9 00:19:44.703731 kubelet[2756]: I0909 00:19:44.703657 2756 reconciler_common.go:293] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/62d16909-5b45-464b-ab31-6c23beca80d3-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Sep 9 00:19:44.703731 kubelet[2756]: I0909 00:19:44.703705 2756 reconciler_common.go:293] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/62d16909-5b45-464b-ab31-6c23beca80d3-lib-modules\") on node \"localhost\" DevicePath \"\"" Sep 9 00:19:44.703731 kubelet[2756]: I0909 00:19:44.703723 2756 reconciler_common.go:293] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/62d16909-5b45-464b-ab31-6c23beca80d3-cilium-run\") on node \"localhost\" DevicePath \"\"" Sep 9 00:19:44.703731 kubelet[2756]: I0909 00:19:44.703734 2756 reconciler_common.go:293] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/62d16909-5b45-464b-ab31-6c23beca80d3-bpf-maps\") on node \"localhost\" DevicePath \"\"" Sep 9 00:19:44.703731 kubelet[2756]: I0909 00:19:44.703744 2756 reconciler_common.go:293] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/62d16909-5b45-464b-ab31-6c23beca80d3-host-proc-sys-kernel\") on node \"localhost\" DevicePath \"\"" Sep 9 00:19:44.704116 kubelet[2756]: I0909 00:19:44.703755 2756 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-h8w88\" (UniqueName: \"kubernetes.io/projected/f40fe4b1-c833-48b1-a4bf-c80e09aef469-kube-api-access-h8w88\") on node \"localhost\" DevicePath \"\"" Sep 9 00:19:44.704116 kubelet[2756]: I0909 00:19:44.703767 2756 reconciler_common.go:293] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/62d16909-5b45-464b-ab31-6c23beca80d3-hostproc\") on node \"localhost\" DevicePath \"\"" Sep 9 00:19:44.704116 kubelet[2756]: I0909 00:19:44.703779 2756 reconciler_common.go:293] "Volume detached for 
volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/62d16909-5b45-464b-ab31-6c23beca80d3-xtables-lock\") on node \"localhost\" DevicePath \"\"" Sep 9 00:19:44.704116 kubelet[2756]: I0909 00:19:44.703788 2756 reconciler_common.go:293] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/62d16909-5b45-464b-ab31-6c23beca80d3-etc-cni-netd\") on node \"localhost\" DevicePath \"\"" Sep 9 00:19:44.704116 kubelet[2756]: I0909 00:19:44.703798 2756 reconciler_common.go:293] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/62d16909-5b45-464b-ab31-6c23beca80d3-clustermesh-secrets\") on node \"localhost\" DevicePath \"\"" Sep 9 00:19:44.704116 kubelet[2756]: I0909 00:19:44.703807 2756 reconciler_common.go:293] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/62d16909-5b45-464b-ab31-6c23beca80d3-hubble-tls\") on node \"localhost\" DevicePath \"\"" Sep 9 00:19:44.704116 kubelet[2756]: I0909 00:19:44.703815 2756 reconciler_common.go:293] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/62d16909-5b45-464b-ab31-6c23beca80d3-host-proc-sys-net\") on node \"localhost\" DevicePath \"\"" Sep 9 00:19:44.704116 kubelet[2756]: I0909 00:19:44.703851 2756 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-trgfh\" (UniqueName: \"kubernetes.io/projected/62d16909-5b45-464b-ab31-6c23beca80d3-kube-api-access-trgfh\") on node \"localhost\" DevicePath \"\"" Sep 9 00:19:44.704380 kubelet[2756]: I0909 00:19:44.703863 2756 reconciler_common.go:293] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/f40fe4b1-c833-48b1-a4bf-c80e09aef469-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Sep 9 00:19:44.704380 kubelet[2756]: I0909 00:19:44.703873 2756 reconciler_common.go:293] "Volume detached for volume \"cilium-cgroup\" (UniqueName: 
\"kubernetes.io/host-path/62d16909-5b45-464b-ab31-6c23beca80d3-cilium-cgroup\") on node \"localhost\" DevicePath \"\"" Sep 9 00:19:44.715915 kubelet[2756]: E0909 00:19:44.715856 2756 kubelet.go:2902] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Sep 9 00:19:44.913605 kubelet[2756]: I0909 00:19:44.913546 2756 scope.go:117] "RemoveContainer" containerID="9bb876e2646422d8fa9a5529dd49114c54831802ebf672740665c14af8b2f436" Sep 9 00:19:44.915906 containerd[1580]: time="2025-09-09T00:19:44.915850820Z" level=info msg="RemoveContainer for \"9bb876e2646422d8fa9a5529dd49114c54831802ebf672740665c14af8b2f436\"" Sep 9 00:19:45.050340 containerd[1580]: time="2025-09-09T00:19:45.050179172Z" level=info msg="RemoveContainer for \"9bb876e2646422d8fa9a5529dd49114c54831802ebf672740665c14af8b2f436\" returns successfully" Sep 9 00:19:45.051269 kubelet[2756]: I0909 00:19:45.050788 2756 scope.go:117] "RemoveContainer" containerID="4047b86f4af245158d39dc05e1ff6da43a2d8ef296355ce8f4cea260372fe6da" Sep 9 00:19:45.054229 containerd[1580]: time="2025-09-09T00:19:45.054141890Z" level=info msg="RemoveContainer for \"4047b86f4af245158d39dc05e1ff6da43a2d8ef296355ce8f4cea260372fe6da\"" Sep 9 00:19:45.060949 containerd[1580]: time="2025-09-09T00:19:45.060880531Z" level=info msg="RemoveContainer for \"4047b86f4af245158d39dc05e1ff6da43a2d8ef296355ce8f4cea260372fe6da\" returns successfully" Sep 9 00:19:45.061196 kubelet[2756]: I0909 00:19:45.061156 2756 scope.go:117] "RemoveContainer" containerID="b725f4c3fd37cabce427ace194eff11edb9264e23c3636a9cff6e8a71889bc08" Sep 9 00:19:45.063641 containerd[1580]: time="2025-09-09T00:19:45.063608020Z" level=info msg="RemoveContainer for \"b725f4c3fd37cabce427ace194eff11edb9264e23c3636a9cff6e8a71889bc08\"" Sep 9 00:19:45.076173 containerd[1580]: time="2025-09-09T00:19:45.076103548Z" level=info msg="RemoveContainer for 
\"b725f4c3fd37cabce427ace194eff11edb9264e23c3636a9cff6e8a71889bc08\" returns successfully" Sep 9 00:19:45.076500 kubelet[2756]: I0909 00:19:45.076465 2756 scope.go:117] "RemoveContainer" containerID="30b5d1308ccbd31ec58822b9045532a246f339f467c623d6d3686a9c38184411" Sep 9 00:19:45.078131 containerd[1580]: time="2025-09-09T00:19:45.078100573Z" level=info msg="RemoveContainer for \"30b5d1308ccbd31ec58822b9045532a246f339f467c623d6d3686a9c38184411\"" Sep 9 00:19:45.082976 containerd[1580]: time="2025-09-09T00:19:45.082912782Z" level=info msg="RemoveContainer for \"30b5d1308ccbd31ec58822b9045532a246f339f467c623d6d3686a9c38184411\" returns successfully" Sep 9 00:19:45.083185 kubelet[2756]: I0909 00:19:45.083142 2756 scope.go:117] "RemoveContainer" containerID="90386877b7b6f2304500552f8e0073c2f065bbd23dcd7063f4aacb275de7c798" Sep 9 00:19:45.084842 containerd[1580]: time="2025-09-09T00:19:45.084803686Z" level=info msg="RemoveContainer for \"90386877b7b6f2304500552f8e0073c2f065bbd23dcd7063f4aacb275de7c798\"" Sep 9 00:19:45.105333 containerd[1580]: time="2025-09-09T00:19:45.105258068Z" level=info msg="RemoveContainer for \"90386877b7b6f2304500552f8e0073c2f065bbd23dcd7063f4aacb275de7c798\" returns successfully" Sep 9 00:19:45.105583 kubelet[2756]: I0909 00:19:45.105551 2756 scope.go:117] "RemoveContainer" containerID="9bb876e2646422d8fa9a5529dd49114c54831802ebf672740665c14af8b2f436" Sep 9 00:19:45.105910 containerd[1580]: time="2025-09-09T00:19:45.105854918Z" level=error msg="ContainerStatus for \"9bb876e2646422d8fa9a5529dd49114c54831802ebf672740665c14af8b2f436\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"9bb876e2646422d8fa9a5529dd49114c54831802ebf672740665c14af8b2f436\": not found" Sep 9 00:19:45.111355 kubelet[2756]: E0909 00:19:45.111314 2756 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container 
\"9bb876e2646422d8fa9a5529dd49114c54831802ebf672740665c14af8b2f436\": not found" containerID="9bb876e2646422d8fa9a5529dd49114c54831802ebf672740665c14af8b2f436" Sep 9 00:19:45.111463 kubelet[2756]: I0909 00:19:45.111367 2756 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"9bb876e2646422d8fa9a5529dd49114c54831802ebf672740665c14af8b2f436"} err="failed to get container status \"9bb876e2646422d8fa9a5529dd49114c54831802ebf672740665c14af8b2f436\": rpc error: code = NotFound desc = an error occurred when try to find container \"9bb876e2646422d8fa9a5529dd49114c54831802ebf672740665c14af8b2f436\": not found" Sep 9 00:19:45.111501 kubelet[2756]: I0909 00:19:45.111465 2756 scope.go:117] "RemoveContainer" containerID="4047b86f4af245158d39dc05e1ff6da43a2d8ef296355ce8f4cea260372fe6da" Sep 9 00:19:45.111748 containerd[1580]: time="2025-09-09T00:19:45.111711237Z" level=error msg="ContainerStatus for \"4047b86f4af245158d39dc05e1ff6da43a2d8ef296355ce8f4cea260372fe6da\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"4047b86f4af245158d39dc05e1ff6da43a2d8ef296355ce8f4cea260372fe6da\": not found" Sep 9 00:19:45.111871 kubelet[2756]: E0909 00:19:45.111846 2756 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"4047b86f4af245158d39dc05e1ff6da43a2d8ef296355ce8f4cea260372fe6da\": not found" containerID="4047b86f4af245158d39dc05e1ff6da43a2d8ef296355ce8f4cea260372fe6da" Sep 9 00:19:45.111921 kubelet[2756]: I0909 00:19:45.111875 2756 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"4047b86f4af245158d39dc05e1ff6da43a2d8ef296355ce8f4cea260372fe6da"} err="failed to get container status \"4047b86f4af245158d39dc05e1ff6da43a2d8ef296355ce8f4cea260372fe6da\": rpc error: code = NotFound desc = an error occurred when try to find container 
\"4047b86f4af245158d39dc05e1ff6da43a2d8ef296355ce8f4cea260372fe6da\": not found" Sep 9 00:19:45.111921 kubelet[2756]: I0909 00:19:45.111893 2756 scope.go:117] "RemoveContainer" containerID="b725f4c3fd37cabce427ace194eff11edb9264e23c3636a9cff6e8a71889bc08" Sep 9 00:19:45.112111 containerd[1580]: time="2025-09-09T00:19:45.112075267Z" level=error msg="ContainerStatus for \"b725f4c3fd37cabce427ace194eff11edb9264e23c3636a9cff6e8a71889bc08\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"b725f4c3fd37cabce427ace194eff11edb9264e23c3636a9cff6e8a71889bc08\": not found" Sep 9 00:19:45.112212 kubelet[2756]: E0909 00:19:45.112182 2756 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"b725f4c3fd37cabce427ace194eff11edb9264e23c3636a9cff6e8a71889bc08\": not found" containerID="b725f4c3fd37cabce427ace194eff11edb9264e23c3636a9cff6e8a71889bc08" Sep 9 00:19:45.112278 kubelet[2756]: I0909 00:19:45.112212 2756 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"b725f4c3fd37cabce427ace194eff11edb9264e23c3636a9cff6e8a71889bc08"} err="failed to get container status \"b725f4c3fd37cabce427ace194eff11edb9264e23c3636a9cff6e8a71889bc08\": rpc error: code = NotFound desc = an error occurred when try to find container \"b725f4c3fd37cabce427ace194eff11edb9264e23c3636a9cff6e8a71889bc08\": not found" Sep 9 00:19:45.112278 kubelet[2756]: I0909 00:19:45.112236 2756 scope.go:117] "RemoveContainer" containerID="30b5d1308ccbd31ec58822b9045532a246f339f467c623d6d3686a9c38184411" Sep 9 00:19:45.112466 containerd[1580]: time="2025-09-09T00:19:45.112411303Z" level=error msg="ContainerStatus for \"30b5d1308ccbd31ec58822b9045532a246f339f467c623d6d3686a9c38184411\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container 
\"30b5d1308ccbd31ec58822b9045532a246f339f467c623d6d3686a9c38184411\": not found" Sep 9 00:19:45.112561 kubelet[2756]: E0909 00:19:45.112535 2756 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"30b5d1308ccbd31ec58822b9045532a246f339f467c623d6d3686a9c38184411\": not found" containerID="30b5d1308ccbd31ec58822b9045532a246f339f467c623d6d3686a9c38184411" Sep 9 00:19:45.112636 kubelet[2756]: I0909 00:19:45.112560 2756 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"30b5d1308ccbd31ec58822b9045532a246f339f467c623d6d3686a9c38184411"} err="failed to get container status \"30b5d1308ccbd31ec58822b9045532a246f339f467c623d6d3686a9c38184411\": rpc error: code = NotFound desc = an error occurred when try to find container \"30b5d1308ccbd31ec58822b9045532a246f339f467c623d6d3686a9c38184411\": not found" Sep 9 00:19:45.112636 kubelet[2756]: I0909 00:19:45.112575 2756 scope.go:117] "RemoveContainer" containerID="90386877b7b6f2304500552f8e0073c2f065bbd23dcd7063f4aacb275de7c798" Sep 9 00:19:45.112745 containerd[1580]: time="2025-09-09T00:19:45.112718776Z" level=error msg="ContainerStatus for \"90386877b7b6f2304500552f8e0073c2f065bbd23dcd7063f4aacb275de7c798\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"90386877b7b6f2304500552f8e0073c2f065bbd23dcd7063f4aacb275de7c798\": not found" Sep 9 00:19:45.112851 kubelet[2756]: E0909 00:19:45.112818 2756 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"90386877b7b6f2304500552f8e0073c2f065bbd23dcd7063f4aacb275de7c798\": not found" containerID="90386877b7b6f2304500552f8e0073c2f065bbd23dcd7063f4aacb275de7c798" Sep 9 00:19:45.112894 kubelet[2756]: I0909 00:19:45.112847 2756 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"containerd","ID":"90386877b7b6f2304500552f8e0073c2f065bbd23dcd7063f4aacb275de7c798"} err="failed to get container status \"90386877b7b6f2304500552f8e0073c2f065bbd23dcd7063f4aacb275de7c798\": rpc error: code = NotFound desc = an error occurred when try to find container \"90386877b7b6f2304500552f8e0073c2f065bbd23dcd7063f4aacb275de7c798\": not found" Sep 9 00:19:45.112894 kubelet[2756]: I0909 00:19:45.112871 2756 scope.go:117] "RemoveContainer" containerID="5861178676eca7f29ec27f69a3d684b602f0401f81fbf60ae62a87f5535ea920" Sep 9 00:19:45.115856 containerd[1580]: time="2025-09-09T00:19:45.115798683Z" level=info msg="RemoveContainer for \"5861178676eca7f29ec27f69a3d684b602f0401f81fbf60ae62a87f5535ea920\"" Sep 9 00:19:45.121652 containerd[1580]: time="2025-09-09T00:19:45.121584747Z" level=info msg="RemoveContainer for \"5861178676eca7f29ec27f69a3d684b602f0401f81fbf60ae62a87f5535ea920\" returns successfully" Sep 9 00:19:45.121899 kubelet[2756]: I0909 00:19:45.121867 2756 scope.go:117] "RemoveContainer" containerID="5861178676eca7f29ec27f69a3d684b602f0401f81fbf60ae62a87f5535ea920" Sep 9 00:19:45.122206 containerd[1580]: time="2025-09-09T00:19:45.122119631Z" level=error msg="ContainerStatus for \"5861178676eca7f29ec27f69a3d684b602f0401f81fbf60ae62a87f5535ea920\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"5861178676eca7f29ec27f69a3d684b602f0401f81fbf60ae62a87f5535ea920\": not found" Sep 9 00:19:45.122412 kubelet[2756]: E0909 00:19:45.122254 2756 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"5861178676eca7f29ec27f69a3d684b602f0401f81fbf60ae62a87f5535ea920\": not found" containerID="5861178676eca7f29ec27f69a3d684b602f0401f81fbf60ae62a87f5535ea920" Sep 9 00:19:45.122412 kubelet[2756]: I0909 00:19:45.122284 2756 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"containerd","ID":"5861178676eca7f29ec27f69a3d684b602f0401f81fbf60ae62a87f5535ea920"} err="failed to get container status \"5861178676eca7f29ec27f69a3d684b602f0401f81fbf60ae62a87f5535ea920\": rpc error: code = NotFound desc = an error occurred when try to find container \"5861178676eca7f29ec27f69a3d684b602f0401f81fbf60ae62a87f5535ea920\": not found" Sep 9 00:19:45.347831 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-84fecb115a0d8c220944e7fb3d95fe609b88427815b65555c95de8fbbebf3bac-shm.mount: Deactivated successfully. Sep 9 00:19:45.347965 systemd[1]: var-lib-kubelet-pods-f40fe4b1\x2dc833\x2d48b1\x2da4bf\x2dc80e09aef469-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dh8w88.mount: Deactivated successfully. Sep 9 00:19:45.348077 systemd[1]: var-lib-kubelet-pods-62d16909\x2d5b45\x2d464b\x2dab31\x2d6c23beca80d3-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dtrgfh.mount: Deactivated successfully. Sep 9 00:19:45.348168 systemd[1]: var-lib-kubelet-pods-62d16909\x2d5b45\x2d464b\x2dab31\x2d6c23beca80d3-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Sep 9 00:19:45.348239 systemd[1]: var-lib-kubelet-pods-62d16909\x2d5b45\x2d464b\x2dab31\x2d6c23beca80d3-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. 
Sep 9 00:19:46.005097 kubelet[2756]: I0909 00:19:46.004999 2756 setters.go:600] "Node became not ready" node="localhost" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-09T00:19:46Z","lastTransitionTime":"2025-09-09T00:19:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"} Sep 9 00:19:46.221926 sshd[4364]: Connection closed by 10.0.0.1 port 42554 Sep 9 00:19:46.222673 sshd-session[4362]: pam_unix(sshd:session): session closed for user core Sep 9 00:19:46.239722 systemd[1]: sshd@26-10.0.0.54:22-10.0.0.1:42554.service: Deactivated successfully. Sep 9 00:19:46.242250 systemd[1]: session-27.scope: Deactivated successfully. Sep 9 00:19:46.243180 systemd-logind[1565]: Session 27 logged out. Waiting for processes to exit. Sep 9 00:19:46.246800 systemd[1]: Started sshd@27-10.0.0.54:22-10.0.0.1:42564.service - OpenSSH per-connection server daemon (10.0.0.1:42564). Sep 9 00:19:46.247737 systemd-logind[1565]: Removed session 27. Sep 9 00:19:46.308181 sshd[4513]: Accepted publickey for core from 10.0.0.1 port 42564 ssh2: RSA SHA256:71pIFbyCQBIroIGqc5DeH9snrZBBxVf1ertHDrOSKjM Sep 9 00:19:46.310391 sshd-session[4513]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 00:19:46.316409 systemd-logind[1565]: New session 28 of user core. Sep 9 00:19:46.325280 systemd[1]: Started session-28.scope - Session 28 of User core. 
Sep 9 00:19:46.622258 kubelet[2756]: I0909 00:19:46.622201 2756 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="62d16909-5b45-464b-ab31-6c23beca80d3" path="/var/lib/kubelet/pods/62d16909-5b45-464b-ab31-6c23beca80d3/volumes" Sep 9 00:19:46.623286 kubelet[2756]: I0909 00:19:46.623243 2756 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f40fe4b1-c833-48b1-a4bf-c80e09aef469" path="/var/lib/kubelet/pods/f40fe4b1-c833-48b1-a4bf-c80e09aef469/volumes" Sep 9 00:19:47.144144 sshd[4517]: Connection closed by 10.0.0.1 port 42564 Sep 9 00:19:47.144732 sshd-session[4513]: pam_unix(sshd:session): session closed for user core Sep 9 00:19:47.160671 systemd[1]: sshd@27-10.0.0.54:22-10.0.0.1:42564.service: Deactivated successfully. Sep 9 00:19:47.164496 systemd[1]: session-28.scope: Deactivated successfully. Sep 9 00:19:47.165667 systemd-logind[1565]: Session 28 logged out. Waiting for processes to exit. Sep 9 00:19:47.169673 systemd[1]: Started sshd@28-10.0.0.54:22-10.0.0.1:42574.service - OpenSSH per-connection server daemon (10.0.0.1:42574). Sep 9 00:19:47.171133 systemd-logind[1565]: Removed session 28. Sep 9 00:19:47.228508 sshd[4529]: Accepted publickey for core from 10.0.0.1 port 42574 ssh2: RSA SHA256:71pIFbyCQBIroIGqc5DeH9snrZBBxVf1ertHDrOSKjM Sep 9 00:19:47.230308 sshd-session[4529]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 00:19:47.235775 systemd-logind[1565]: New session 29 of user core. Sep 9 00:19:47.251228 systemd[1]: Started session-29.scope - Session 29 of User core. 
Sep 9 00:19:47.305502 sshd[4531]: Connection closed by 10.0.0.1 port 42574 Sep 9 00:19:47.305886 sshd-session[4529]: pam_unix(sshd:session): session closed for user core Sep 9 00:19:47.310879 kubelet[2756]: E0909 00:19:47.310827 2756 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="62d16909-5b45-464b-ab31-6c23beca80d3" containerName="apply-sysctl-overwrites" Sep 9 00:19:47.310879 kubelet[2756]: E0909 00:19:47.310872 2756 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="f40fe4b1-c833-48b1-a4bf-c80e09aef469" containerName="cilium-operator" Sep 9 00:19:47.310879 kubelet[2756]: E0909 00:19:47.310881 2756 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="62d16909-5b45-464b-ab31-6c23beca80d3" containerName="mount-bpf-fs" Sep 9 00:19:47.311731 kubelet[2756]: E0909 00:19:47.310892 2756 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="62d16909-5b45-464b-ab31-6c23beca80d3" containerName="cilium-agent" Sep 9 00:19:47.311731 kubelet[2756]: E0909 00:19:47.310902 2756 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="62d16909-5b45-464b-ab31-6c23beca80d3" containerName="mount-cgroup" Sep 9 00:19:47.311731 kubelet[2756]: E0909 00:19:47.310910 2756 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="62d16909-5b45-464b-ab31-6c23beca80d3" containerName="clean-cilium-state" Sep 9 00:19:47.311731 kubelet[2756]: I0909 00:19:47.310950 2756 memory_manager.go:354] "RemoveStaleState removing state" podUID="f40fe4b1-c833-48b1-a4bf-c80e09aef469" containerName="cilium-operator" Sep 9 00:19:47.311731 kubelet[2756]: I0909 00:19:47.310960 2756 memory_manager.go:354] "RemoveStaleState removing state" podUID="62d16909-5b45-464b-ab31-6c23beca80d3" containerName="cilium-agent" Sep 9 00:19:47.317561 systemd[1]: sshd@28-10.0.0.54:22-10.0.0.1:42574.service: Deactivated successfully. Sep 9 00:19:47.324022 systemd[1]: session-29.scope: Deactivated successfully. 
Sep 9 00:19:47.326592 systemd-logind[1565]: Session 29 logged out. Waiting for processes to exit.
Sep 9 00:19:47.336356 systemd[1]: Started sshd@29-10.0.0.54:22-10.0.0.1:42590.service - OpenSSH per-connection server daemon (10.0.0.1:42590).
Sep 9 00:19:47.338731 systemd-logind[1565]: Removed session 29.
Sep 9 00:19:47.350812 systemd[1]: Created slice kubepods-burstable-pod09b6cb84_e4d4_4dbc_b916_d02dfb3f6baf.slice - libcontainer container kubepods-burstable-pod09b6cb84_e4d4_4dbc_b916_d02dfb3f6baf.slice.
Sep 9 00:19:47.391842 sshd[4539]: Accepted publickey for core from 10.0.0.1 port 42590 ssh2: RSA SHA256:71pIFbyCQBIroIGqc5DeH9snrZBBxVf1ertHDrOSKjM
Sep 9 00:19:47.393777 sshd-session[4539]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 9 00:19:47.399091 systemd-logind[1565]: New session 30 of user core.
Sep 9 00:19:47.409275 systemd[1]: Started session-30.scope - Session 30 of User core.
Sep 9 00:19:47.425469 kubelet[2756]: I0909 00:19:47.425412 2756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/09b6cb84-e4d4-4dbc-b916-d02dfb3f6baf-cni-path\") pod \"cilium-qh4bn\" (UID: \"09b6cb84-e4d4-4dbc-b916-d02dfb3f6baf\") " pod="kube-system/cilium-qh4bn"
Sep 9 00:19:47.425469 kubelet[2756]: I0909 00:19:47.425450 2756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/09b6cb84-e4d4-4dbc-b916-d02dfb3f6baf-host-proc-sys-net\") pod \"cilium-qh4bn\" (UID: \"09b6cb84-e4d4-4dbc-b916-d02dfb3f6baf\") " pod="kube-system/cilium-qh4bn"
Sep 9 00:19:47.425469 kubelet[2756]: I0909 00:19:47.425469 2756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/09b6cb84-e4d4-4dbc-b916-d02dfb3f6baf-cilium-run\") pod \"cilium-qh4bn\" (UID: \"09b6cb84-e4d4-4dbc-b916-d02dfb3f6baf\") " pod="kube-system/cilium-qh4bn"
Sep 9 00:19:47.425688 kubelet[2756]: I0909 00:19:47.425492 2756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/09b6cb84-e4d4-4dbc-b916-d02dfb3f6baf-bpf-maps\") pod \"cilium-qh4bn\" (UID: \"09b6cb84-e4d4-4dbc-b916-d02dfb3f6baf\") " pod="kube-system/cilium-qh4bn"
Sep 9 00:19:47.425688 kubelet[2756]: I0909 00:19:47.425513 2756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/09b6cb84-e4d4-4dbc-b916-d02dfb3f6baf-hostproc\") pod \"cilium-qh4bn\" (UID: \"09b6cb84-e4d4-4dbc-b916-d02dfb3f6baf\") " pod="kube-system/cilium-qh4bn"
Sep 9 00:19:47.425688 kubelet[2756]: I0909 00:19:47.425531 2756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/09b6cb84-e4d4-4dbc-b916-d02dfb3f6baf-xtables-lock\") pod \"cilium-qh4bn\" (UID: \"09b6cb84-e4d4-4dbc-b916-d02dfb3f6baf\") " pod="kube-system/cilium-qh4bn"
Sep 9 00:19:47.425688 kubelet[2756]: I0909 00:19:47.425561 2756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/09b6cb84-e4d4-4dbc-b916-d02dfb3f6baf-cilium-cgroup\") pod \"cilium-qh4bn\" (UID: \"09b6cb84-e4d4-4dbc-b916-d02dfb3f6baf\") " pod="kube-system/cilium-qh4bn"
Sep 9 00:19:47.425688 kubelet[2756]: I0909 00:19:47.425583 2756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/09b6cb84-e4d4-4dbc-b916-d02dfb3f6baf-clustermesh-secrets\") pod \"cilium-qh4bn\" (UID: \"09b6cb84-e4d4-4dbc-b916-d02dfb3f6baf\") " pod="kube-system/cilium-qh4bn"
Sep 9 00:19:47.425688 kubelet[2756]: I0909 00:19:47.425619 2756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/09b6cb84-e4d4-4dbc-b916-d02dfb3f6baf-cilium-config-path\") pod \"cilium-qh4bn\" (UID: \"09b6cb84-e4d4-4dbc-b916-d02dfb3f6baf\") " pod="kube-system/cilium-qh4bn"
Sep 9 00:19:47.425834 kubelet[2756]: I0909 00:19:47.425653 2756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/09b6cb84-e4d4-4dbc-b916-d02dfb3f6baf-cilium-ipsec-secrets\") pod \"cilium-qh4bn\" (UID: \"09b6cb84-e4d4-4dbc-b916-d02dfb3f6baf\") " pod="kube-system/cilium-qh4bn"
Sep 9 00:19:47.425834 kubelet[2756]: I0909 00:19:47.425682 2756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/09b6cb84-e4d4-4dbc-b916-d02dfb3f6baf-etc-cni-netd\") pod \"cilium-qh4bn\" (UID: \"09b6cb84-e4d4-4dbc-b916-d02dfb3f6baf\") " pod="kube-system/cilium-qh4bn"
Sep 9 00:19:47.425834 kubelet[2756]: I0909 00:19:47.425698 2756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/09b6cb84-e4d4-4dbc-b916-d02dfb3f6baf-lib-modules\") pod \"cilium-qh4bn\" (UID: \"09b6cb84-e4d4-4dbc-b916-d02dfb3f6baf\") " pod="kube-system/cilium-qh4bn"
Sep 9 00:19:47.425834 kubelet[2756]: I0909 00:19:47.425736 2756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j4fwt\" (UniqueName: \"kubernetes.io/projected/09b6cb84-e4d4-4dbc-b916-d02dfb3f6baf-kube-api-access-j4fwt\") pod \"cilium-qh4bn\" (UID: \"09b6cb84-e4d4-4dbc-b916-d02dfb3f6baf\") " pod="kube-system/cilium-qh4bn"
Sep 9 00:19:47.425834 kubelet[2756]: I0909 00:19:47.425783 2756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/09b6cb84-e4d4-4dbc-b916-d02dfb3f6baf-host-proc-sys-kernel\") pod \"cilium-qh4bn\" (UID: \"09b6cb84-e4d4-4dbc-b916-d02dfb3f6baf\") " pod="kube-system/cilium-qh4bn"
Sep 9 00:19:47.425956 kubelet[2756]: I0909 00:19:47.425822 2756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/09b6cb84-e4d4-4dbc-b916-d02dfb3f6baf-hubble-tls\") pod \"cilium-qh4bn\" (UID: \"09b6cb84-e4d4-4dbc-b916-d02dfb3f6baf\") " pod="kube-system/cilium-qh4bn"
Sep 9 00:19:47.655848 kubelet[2756]: E0909 00:19:47.655654 2756 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 9 00:19:47.656580 containerd[1580]: time="2025-09-09T00:19:47.656358270Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-qh4bn,Uid:09b6cb84-e4d4-4dbc-b916-d02dfb3f6baf,Namespace:kube-system,Attempt:0,}"
Sep 9 00:19:47.677552 containerd[1580]: time="2025-09-09T00:19:47.677497583Z" level=info msg="connecting to shim 0b024515b4c8f4a433b7fe8448e82475d4f71019d1d01a5e1aa87c8d54474c60" address="unix:///run/containerd/s/91f72e2a716d43e672435960ee25c81104bbdd92930e3d7c6f5fd240bb73d6c4" namespace=k8s.io protocol=ttrpc version=3
Sep 9 00:19:47.714345 systemd[1]: Started cri-containerd-0b024515b4c8f4a433b7fe8448e82475d4f71019d1d01a5e1aa87c8d54474c60.scope - libcontainer container 0b024515b4c8f4a433b7fe8448e82475d4f71019d1d01a5e1aa87c8d54474c60.
Sep 9 00:19:47.744994 containerd[1580]: time="2025-09-09T00:19:47.744919239Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-qh4bn,Uid:09b6cb84-e4d4-4dbc-b916-d02dfb3f6baf,Namespace:kube-system,Attempt:0,} returns sandbox id \"0b024515b4c8f4a433b7fe8448e82475d4f71019d1d01a5e1aa87c8d54474c60\""
Sep 9 00:19:47.745694 kubelet[2756]: E0909 00:19:47.745662 2756 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 9 00:19:47.749181 containerd[1580]: time="2025-09-09T00:19:47.749137298Z" level=info msg="CreateContainer within sandbox \"0b024515b4c8f4a433b7fe8448e82475d4f71019d1d01a5e1aa87c8d54474c60\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Sep 9 00:19:47.757357 containerd[1580]: time="2025-09-09T00:19:47.757298879Z" level=info msg="Container 84b5166794190d6fff907d48eadb1d6a91e2d0f4b914d85b6bf1bd309b8f98a0: CDI devices from CRI Config.CDIDevices: []"
Sep 9 00:19:47.774491 containerd[1580]: time="2025-09-09T00:19:47.774427605Z" level=info msg="CreateContainer within sandbox \"0b024515b4c8f4a433b7fe8448e82475d4f71019d1d01a5e1aa87c8d54474c60\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"84b5166794190d6fff907d48eadb1d6a91e2d0f4b914d85b6bf1bd309b8f98a0\""
Sep 9 00:19:47.775027 containerd[1580]: time="2025-09-09T00:19:47.774997956Z" level=info msg="StartContainer for \"84b5166794190d6fff907d48eadb1d6a91e2d0f4b914d85b6bf1bd309b8f98a0\""
Sep 9 00:19:47.790685 containerd[1580]: time="2025-09-09T00:19:47.790618795Z" level=info msg="connecting to shim 84b5166794190d6fff907d48eadb1d6a91e2d0f4b914d85b6bf1bd309b8f98a0" address="unix:///run/containerd/s/91f72e2a716d43e672435960ee25c81104bbdd92930e3d7c6f5fd240bb73d6c4" protocol=ttrpc version=3
Sep 9 00:19:47.821353 systemd[1]: Started cri-containerd-84b5166794190d6fff907d48eadb1d6a91e2d0f4b914d85b6bf1bd309b8f98a0.scope - libcontainer container 84b5166794190d6fff907d48eadb1d6a91e2d0f4b914d85b6bf1bd309b8f98a0.
Sep 9 00:19:47.863687 containerd[1580]: time="2025-09-09T00:19:47.863637182Z" level=info msg="StartContainer for \"84b5166794190d6fff907d48eadb1d6a91e2d0f4b914d85b6bf1bd309b8f98a0\" returns successfully"
Sep 9 00:19:47.875227 systemd[1]: cri-containerd-84b5166794190d6fff907d48eadb1d6a91e2d0f4b914d85b6bf1bd309b8f98a0.scope: Deactivated successfully.
Sep 9 00:19:47.876300 containerd[1580]: time="2025-09-09T00:19:47.876255464Z" level=info msg="received exit event container_id:\"84b5166794190d6fff907d48eadb1d6a91e2d0f4b914d85b6bf1bd309b8f98a0\" id:\"84b5166794190d6fff907d48eadb1d6a91e2d0f4b914d85b6bf1bd309b8f98a0\" pid:4609 exited_at:{seconds:1757377187 nanos:875905392}"
Sep 9 00:19:47.876573 containerd[1580]: time="2025-09-09T00:19:47.876541706Z" level=info msg="TaskExit event in podsandbox handler container_id:\"84b5166794190d6fff907d48eadb1d6a91e2d0f4b914d85b6bf1bd309b8f98a0\" id:\"84b5166794190d6fff907d48eadb1d6a91e2d0f4b914d85b6bf1bd309b8f98a0\" pid:4609 exited_at:{seconds:1757377187 nanos:875905392}"
Sep 9 00:19:47.925879 kubelet[2756]: E0909 00:19:47.925726 2756 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 9 00:19:48.619283 kubelet[2756]: E0909 00:19:48.619235 2756 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 9 00:19:48.929230 kubelet[2756]: E0909 00:19:48.929078 2756 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 9 00:19:48.931486 containerd[1580]: time="2025-09-09T00:19:48.931441572Z" level=info msg="CreateContainer within sandbox \"0b024515b4c8f4a433b7fe8448e82475d4f71019d1d01a5e1aa87c8d54474c60\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Sep 9 00:19:49.004026 containerd[1580]: time="2025-09-09T00:19:49.003944916Z" level=info msg="Container 28e5eda7784bf744292b819dfeb23aa47e5f4145c3073e9f4b7d94ac256259c7: CDI devices from CRI Config.CDIDevices: []"
Sep 9 00:19:49.008900 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1374874120.mount: Deactivated successfully.
Sep 9 00:19:49.014880 containerd[1580]: time="2025-09-09T00:19:49.014826590Z" level=info msg="CreateContainer within sandbox \"0b024515b4c8f4a433b7fe8448e82475d4f71019d1d01a5e1aa87c8d54474c60\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"28e5eda7784bf744292b819dfeb23aa47e5f4145c3073e9f4b7d94ac256259c7\""
Sep 9 00:19:49.016649 containerd[1580]: time="2025-09-09T00:19:49.015405646Z" level=info msg="StartContainer for \"28e5eda7784bf744292b819dfeb23aa47e5f4145c3073e9f4b7d94ac256259c7\""
Sep 9 00:19:49.016649 containerd[1580]: time="2025-09-09T00:19:49.016385362Z" level=info msg="connecting to shim 28e5eda7784bf744292b819dfeb23aa47e5f4145c3073e9f4b7d94ac256259c7" address="unix:///run/containerd/s/91f72e2a716d43e672435960ee25c81104bbdd92930e3d7c6f5fd240bb73d6c4" protocol=ttrpc version=3
Sep 9 00:19:49.047273 systemd[1]: Started cri-containerd-28e5eda7784bf744292b819dfeb23aa47e5f4145c3073e9f4b7d94ac256259c7.scope - libcontainer container 28e5eda7784bf744292b819dfeb23aa47e5f4145c3073e9f4b7d94ac256259c7.
Sep 9 00:19:49.087259 systemd[1]: cri-containerd-28e5eda7784bf744292b819dfeb23aa47e5f4145c3073e9f4b7d94ac256259c7.scope: Deactivated successfully.
Sep 9 00:19:49.087755 containerd[1580]: time="2025-09-09T00:19:49.087710991Z" level=info msg="TaskExit event in podsandbox handler container_id:\"28e5eda7784bf744292b819dfeb23aa47e5f4145c3073e9f4b7d94ac256259c7\" id:\"28e5eda7784bf744292b819dfeb23aa47e5f4145c3073e9f4b7d94ac256259c7\" pid:4656 exited_at:{seconds:1757377189 nanos:87358092}"
Sep 9 00:19:49.111994 containerd[1580]: time="2025-09-09T00:19:49.111929350Z" level=info msg="received exit event container_id:\"28e5eda7784bf744292b819dfeb23aa47e5f4145c3073e9f4b7d94ac256259c7\" id:\"28e5eda7784bf744292b819dfeb23aa47e5f4145c3073e9f4b7d94ac256259c7\" pid:4656 exited_at:{seconds:1757377189 nanos:87358092}"
Sep 9 00:19:49.113292 containerd[1580]: time="2025-09-09T00:19:49.113248066Z" level=info msg="StartContainer for \"28e5eda7784bf744292b819dfeb23aa47e5f4145c3073e9f4b7d94ac256259c7\" returns successfully"
Sep 9 00:19:49.135470 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-28e5eda7784bf744292b819dfeb23aa47e5f4145c3073e9f4b7d94ac256259c7-rootfs.mount: Deactivated successfully.
Sep 9 00:19:49.717496 kubelet[2756]: E0909 00:19:49.717450 2756 kubelet.go:2902] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Sep 9 00:19:49.933818 kubelet[2756]: E0909 00:19:49.933760 2756 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 9 00:19:49.935660 containerd[1580]: time="2025-09-09T00:19:49.935607918Z" level=info msg="CreateContainer within sandbox \"0b024515b4c8f4a433b7fe8448e82475d4f71019d1d01a5e1aa87c8d54474c60\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Sep 9 00:19:49.946446 containerd[1580]: time="2025-09-09T00:19:49.946385687Z" level=info msg="Container 846bcedc311a51bd20207c83b4bd11d8ed8fb868af915397afb2a6242a992201: CDI devices from CRI Config.CDIDevices: []"
Sep 9 00:19:49.955486 containerd[1580]: time="2025-09-09T00:19:49.955423209Z" level=info msg="CreateContainer within sandbox \"0b024515b4c8f4a433b7fe8448e82475d4f71019d1d01a5e1aa87c8d54474c60\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"846bcedc311a51bd20207c83b4bd11d8ed8fb868af915397afb2a6242a992201\""
Sep 9 00:19:49.956009 containerd[1580]: time="2025-09-09T00:19:49.955948524Z" level=info msg="StartContainer for \"846bcedc311a51bd20207c83b4bd11d8ed8fb868af915397afb2a6242a992201\""
Sep 9 00:19:49.957655 containerd[1580]: time="2025-09-09T00:19:49.957622904Z" level=info msg="connecting to shim 846bcedc311a51bd20207c83b4bd11d8ed8fb868af915397afb2a6242a992201" address="unix:///run/containerd/s/91f72e2a716d43e672435960ee25c81104bbdd92930e3d7c6f5fd240bb73d6c4" protocol=ttrpc version=3
Sep 9 00:19:49.986342 systemd[1]: Started cri-containerd-846bcedc311a51bd20207c83b4bd11d8ed8fb868af915397afb2a6242a992201.scope - libcontainer container 846bcedc311a51bd20207c83b4bd11d8ed8fb868af915397afb2a6242a992201.
Sep 9 00:19:50.032742 containerd[1580]: time="2025-09-09T00:19:50.032684777Z" level=info msg="StartContainer for \"846bcedc311a51bd20207c83b4bd11d8ed8fb868af915397afb2a6242a992201\" returns successfully"
Sep 9 00:19:50.036017 systemd[1]: cri-containerd-846bcedc311a51bd20207c83b4bd11d8ed8fb868af915397afb2a6242a992201.scope: Deactivated successfully.
Sep 9 00:19:50.037104 containerd[1580]: time="2025-09-09T00:19:50.036952808Z" level=info msg="received exit event container_id:\"846bcedc311a51bd20207c83b4bd11d8ed8fb868af915397afb2a6242a992201\" id:\"846bcedc311a51bd20207c83b4bd11d8ed8fb868af915397afb2a6242a992201\" pid:4699 exited_at:{seconds:1757377190 nanos:36724445}"
Sep 9 00:19:50.037104 containerd[1580]: time="2025-09-09T00:19:50.037074588Z" level=info msg="TaskExit event in podsandbox handler container_id:\"846bcedc311a51bd20207c83b4bd11d8ed8fb868af915397afb2a6242a992201\" id:\"846bcedc311a51bd20207c83b4bd11d8ed8fb868af915397afb2a6242a992201\" pid:4699 exited_at:{seconds:1757377190 nanos:36724445}"
Sep 9 00:19:50.060000 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-846bcedc311a51bd20207c83b4bd11d8ed8fb868af915397afb2a6242a992201-rootfs.mount: Deactivated successfully.
Sep 9 00:19:50.941927 kubelet[2756]: E0909 00:19:50.941861 2756 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 9 00:19:50.944403 containerd[1580]: time="2025-09-09T00:19:50.944356814Z" level=info msg="CreateContainer within sandbox \"0b024515b4c8f4a433b7fe8448e82475d4f71019d1d01a5e1aa87c8d54474c60\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Sep 9 00:19:50.954926 containerd[1580]: time="2025-09-09T00:19:50.954865349Z" level=info msg="Container 0ecaef876a5aff1aa324c210f75ffc335d3e1b8f7bcc3e9ad996f49fc5dd843f: CDI devices from CRI Config.CDIDevices: []"
Sep 9 00:19:50.966513 containerd[1580]: time="2025-09-09T00:19:50.966430304Z" level=info msg="CreateContainer within sandbox \"0b024515b4c8f4a433b7fe8448e82475d4f71019d1d01a5e1aa87c8d54474c60\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"0ecaef876a5aff1aa324c210f75ffc335d3e1b8f7bcc3e9ad996f49fc5dd843f\""
Sep 9 00:19:50.967100 containerd[1580]: time="2025-09-09T00:19:50.967062471Z" level=info msg="StartContainer for \"0ecaef876a5aff1aa324c210f75ffc335d3e1b8f7bcc3e9ad996f49fc5dd843f\""
Sep 9 00:19:50.968184 containerd[1580]: time="2025-09-09T00:19:50.968152304Z" level=info msg="connecting to shim 0ecaef876a5aff1aa324c210f75ffc335d3e1b8f7bcc3e9ad996f49fc5dd843f" address="unix:///run/containerd/s/91f72e2a716d43e672435960ee25c81104bbdd92930e3d7c6f5fd240bb73d6c4" protocol=ttrpc version=3
Sep 9 00:19:50.995210 systemd[1]: Started cri-containerd-0ecaef876a5aff1aa324c210f75ffc335d3e1b8f7bcc3e9ad996f49fc5dd843f.scope - libcontainer container 0ecaef876a5aff1aa324c210f75ffc335d3e1b8f7bcc3e9ad996f49fc5dd843f.
Sep 9 00:19:51.027610 systemd[1]: cri-containerd-0ecaef876a5aff1aa324c210f75ffc335d3e1b8f7bcc3e9ad996f49fc5dd843f.scope: Deactivated successfully.
Sep 9 00:19:51.028221 containerd[1580]: time="2025-09-09T00:19:51.028176664Z" level=info msg="TaskExit event in podsandbox handler container_id:\"0ecaef876a5aff1aa324c210f75ffc335d3e1b8f7bcc3e9ad996f49fc5dd843f\" id:\"0ecaef876a5aff1aa324c210f75ffc335d3e1b8f7bcc3e9ad996f49fc5dd843f\" pid:4742 exited_at:{seconds:1757377191 nanos:27815210}"
Sep 9 00:19:51.028995 containerd[1580]: time="2025-09-09T00:19:51.028959155Z" level=info msg="received exit event container_id:\"0ecaef876a5aff1aa324c210f75ffc335d3e1b8f7bcc3e9ad996f49fc5dd843f\" id:\"0ecaef876a5aff1aa324c210f75ffc335d3e1b8f7bcc3e9ad996f49fc5dd843f\" pid:4742 exited_at:{seconds:1757377191 nanos:27815210}"
Sep 9 00:19:51.037447 containerd[1580]: time="2025-09-09T00:19:51.037393531Z" level=info msg="StartContainer for \"0ecaef876a5aff1aa324c210f75ffc335d3e1b8f7bcc3e9ad996f49fc5dd843f\" returns successfully"
Sep 9 00:19:51.055678 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-0ecaef876a5aff1aa324c210f75ffc335d3e1b8f7bcc3e9ad996f49fc5dd843f-rootfs.mount: Deactivated successfully.
Sep 9 00:19:51.453612 update_engine[1569]: I20250909 00:19:51.453517 1569 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
Sep 9 00:19:51.454119 update_engine[1569]: I20250909 00:19:51.454010 1569 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
Sep 9 00:19:51.454434 update_engine[1569]: I20250909 00:19:51.454397 1569 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
Sep 9 00:19:51.466808 update_engine[1569]: E20250909 00:19:51.466722 1569 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
Sep 9 00:19:51.466927 update_engine[1569]: I20250909 00:19:51.466816 1569 libcurl_http_fetcher.cc:283] No HTTP response, retry 3
Sep 9 00:19:51.948443 kubelet[2756]: E0909 00:19:51.948401 2756 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 9 00:19:51.950960 containerd[1580]: time="2025-09-09T00:19:51.950903104Z" level=info msg="CreateContainer within sandbox \"0b024515b4c8f4a433b7fe8448e82475d4f71019d1d01a5e1aa87c8d54474c60\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Sep 9 00:19:51.977622 containerd[1580]: time="2025-09-09T00:19:51.977558058Z" level=info msg="Container f12c01fd3fa14ca35049e2bf36b0c7a4e6ff10e2e496416eb8f904a2d0df67db: CDI devices from CRI Config.CDIDevices: []"
Sep 9 00:19:51.987223 containerd[1580]: time="2025-09-09T00:19:51.987149834Z" level=info msg="CreateContainer within sandbox \"0b024515b4c8f4a433b7fe8448e82475d4f71019d1d01a5e1aa87c8d54474c60\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"f12c01fd3fa14ca35049e2bf36b0c7a4e6ff10e2e496416eb8f904a2d0df67db\""
Sep 9 00:19:51.987803 containerd[1580]: time="2025-09-09T00:19:51.987760160Z" level=info msg="StartContainer for \"f12c01fd3fa14ca35049e2bf36b0c7a4e6ff10e2e496416eb8f904a2d0df67db\""
Sep 9 00:19:51.988833 containerd[1580]: time="2025-09-09T00:19:51.988804797Z" level=info msg="connecting to shim f12c01fd3fa14ca35049e2bf36b0c7a4e6ff10e2e496416eb8f904a2d0df67db" address="unix:///run/containerd/s/91f72e2a716d43e672435960ee25c81104bbdd92930e3d7c6f5fd240bb73d6c4" protocol=ttrpc version=3
Sep 9 00:19:52.019304 systemd[1]: Started cri-containerd-f12c01fd3fa14ca35049e2bf36b0c7a4e6ff10e2e496416eb8f904a2d0df67db.scope - libcontainer container f12c01fd3fa14ca35049e2bf36b0c7a4e6ff10e2e496416eb8f904a2d0df67db.
Sep 9 00:19:52.061847 containerd[1580]: time="2025-09-09T00:19:52.061800110Z" level=info msg="StartContainer for \"f12c01fd3fa14ca35049e2bf36b0c7a4e6ff10e2e496416eb8f904a2d0df67db\" returns successfully"
Sep 9 00:19:52.142948 containerd[1580]: time="2025-09-09T00:19:52.142883634Z" level=info msg="TaskExit event in podsandbox handler container_id:\"f12c01fd3fa14ca35049e2bf36b0c7a4e6ff10e2e496416eb8f904a2d0df67db\" id:\"0f15d07bad6b6437740997b6139eb4e81f4bb52787eda8f52adbd8234b1f3423\" pid:4811 exited_at:{seconds:1757377192 nanos:142483377}"
Sep 9 00:19:52.736265 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni-avx))
Sep 9 00:19:52.965909 kubelet[2756]: E0909 00:19:52.965035 2756 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 9 00:19:53.000407 kubelet[2756]: I0909 00:19:52.998093 2756 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-qh4bn" podStartSLOduration=5.998067059 podStartE2EDuration="5.998067059s" podCreationTimestamp="2025-09-09 00:19:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-09 00:19:52.996611334 +0000 UTC m=+98.513772620" watchObservedRunningTime="2025-09-09 00:19:52.998067059 +0000 UTC m=+98.515228315"
Sep 9 00:19:53.877594 containerd[1580]: time="2025-09-09T00:19:53.877482442Z" level=info msg="TaskExit event in podsandbox handler container_id:\"f12c01fd3fa14ca35049e2bf36b0c7a4e6ff10e2e496416eb8f904a2d0df67db\" id:\"e93da9ac085d588e84d5dcbbc02323a9897f011b9faa6090cdd43189e3dac62f\" pid:4886 exit_status:1 exited_at:{seconds:1757377193 nanos:876685323}"
Sep 9 00:19:53.965692 kubelet[2756]: E0909 00:19:53.965623 2756 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 9 00:19:56.012367 containerd[1580]: time="2025-09-09T00:19:56.012310803Z" level=info msg="TaskExit event in podsandbox handler container_id:\"f12c01fd3fa14ca35049e2bf36b0c7a4e6ff10e2e496416eb8f904a2d0df67db\" id:\"36f0379a0389ab8f8adb312343281f22cacb333e6a7a75f5a1dfe5fbd619b565\" pid:5245 exit_status:1 exited_at:{seconds:1757377196 nanos:11886080}"
Sep 9 00:19:56.356231 systemd-networkd[1439]: lxc_health: Link UP
Sep 9 00:19:56.360598 systemd-networkd[1439]: lxc_health: Gained carrier
Sep 9 00:19:57.625377 systemd-networkd[1439]: lxc_health: Gained IPv6LL
Sep 9 00:19:57.658018 kubelet[2756]: E0909 00:19:57.657975 2756 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 9 00:19:57.974411 kubelet[2756]: E0909 00:19:57.974235 2756 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 9 00:19:58.119771 containerd[1580]: time="2025-09-09T00:19:58.119670649Z" level=info msg="TaskExit event in podsandbox handler container_id:\"f12c01fd3fa14ca35049e2bf36b0c7a4e6ff10e2e496416eb8f904a2d0df67db\" id:\"0ffba97720a5d1f501bc0c01fb2a281a7b7c3fb6ee258dacab1d44bd5a2f0e1c\" pid:5370 exited_at:{seconds:1757377198 nanos:119257689}"
Sep 9 00:19:58.977070 kubelet[2756]: E0909 00:19:58.976970 2756 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 9 00:20:00.259510 containerd[1580]: time="2025-09-09T00:20:00.259438668Z" level=info msg="TaskExit event in podsandbox handler container_id:\"f12c01fd3fa14ca35049e2bf36b0c7a4e6ff10e2e496416eb8f904a2d0df67db\" id:\"6c93a16f9449dfdb706473ac9a65b6dbe6a56aebf23d802dd8c6f2d9498bbadd\" pid:5403 exited_at:{seconds:1757377200 nanos:258792947}"
Sep 9 00:20:01.454597 update_engine[1569]: I20250909 00:20:01.454331 1569 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
Sep 9 00:20:01.455187 update_engine[1569]: I20250909 00:20:01.454805 1569 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
Sep 9 00:20:01.455228 update_engine[1569]: I20250909 00:20:01.455184 1569 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
Sep 9 00:20:01.476934 update_engine[1569]: E20250909 00:20:01.476374 1569 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
Sep 9 00:20:01.476934 update_engine[1569]: I20250909 00:20:01.476495 1569 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded
Sep 9 00:20:01.476934 update_engine[1569]: I20250909 00:20:01.476511 1569 omaha_request_action.cc:617] Omaha request response:
Sep 9 00:20:01.477332 update_engine[1569]: E20250909 00:20:01.477289 1569 omaha_request_action.cc:636] Omaha request network transfer failed.
Sep 9 00:20:01.477393 update_engine[1569]: I20250909 00:20:01.477370 1569 action_processor.cc:68] ActionProcessor::ActionComplete: OmahaRequestAction action failed. Aborting processing.
Sep 9 00:20:01.477393 update_engine[1569]: I20250909 00:20:01.477384 1569 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction
Sep 9 00:20:01.477450 update_engine[1569]: I20250909 00:20:01.477392 1569 update_attempter.cc:306] Processing Done.
Sep 9 00:20:01.477450 update_engine[1569]: E20250909 00:20:01.477419 1569 update_attempter.cc:619] Update failed.
Sep 9 00:20:01.477450 update_engine[1569]: I20250909 00:20:01.477428 1569 utils.cc:600] Converting error code 2000 to kActionCodeOmahaErrorInHTTPResponse
Sep 9 00:20:01.477450 update_engine[1569]: I20250909 00:20:01.477437 1569 payload_state.cc:97] Updating payload state for error code: 37 (kActionCodeOmahaErrorInHTTPResponse)
Sep 9 00:20:01.477450 update_engine[1569]: I20250909 00:20:01.477447 1569 payload_state.cc:103] Ignoring failures until we get a valid Omaha response.
Sep 9 00:20:01.477592 update_engine[1569]: I20250909 00:20:01.477553 1569 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction
Sep 9 00:20:01.477620 update_engine[1569]: I20250909 00:20:01.477591 1569 omaha_request_action.cc:271] Posting an Omaha request to disabled
Sep 9 00:20:01.477620 update_engine[1569]: I20250909 00:20:01.477602 1569 omaha_request_action.cc:272] Request:
Sep 9 00:20:01.477620 update_engine[1569]:
Sep 9 00:20:01.477620 update_engine[1569]:
Sep 9 00:20:01.477620 update_engine[1569]:
Sep 9 00:20:01.477620 update_engine[1569]:
Sep 9 00:20:01.477620 update_engine[1569]:
Sep 9 00:20:01.477620 update_engine[1569]:
Sep 9 00:20:01.477620 update_engine[1569]: I20250909 00:20:01.477612 1569 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
Sep 9 00:20:01.478028 update_engine[1569]: I20250909 00:20:01.477871 1569 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
Sep 9 00:20:01.478328 update_engine[1569]: I20250909 00:20:01.478199 1569 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
Sep 9 00:20:01.478956 locksmithd[1611]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_REPORTING_ERROR_EVENT" NewVersion=0.0.0 NewSize=0
Sep 9 00:20:01.495195 update_engine[1569]: E20250909 00:20:01.495023 1569 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
Sep 9 00:20:01.495304 update_engine[1569]: I20250909 00:20:01.495213 1569 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded
Sep 9 00:20:01.495304 update_engine[1569]: I20250909 00:20:01.495227 1569 omaha_request_action.cc:617] Omaha request response:
Sep 9 00:20:01.495304 update_engine[1569]: I20250909 00:20:01.495239 1569 action_processor.cc:65] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction
Sep 9 00:20:01.495304 update_engine[1569]: I20250909 00:20:01.495249 1569 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction
Sep 9 00:20:01.495304 update_engine[1569]: I20250909 00:20:01.495258 1569 update_attempter.cc:306] Processing Done.
Sep 9 00:20:01.495304 update_engine[1569]: I20250909 00:20:01.495268 1569 update_attempter.cc:310] Error event sent.
Sep 9 00:20:01.495476 update_engine[1569]: I20250909 00:20:01.495301 1569 update_check_scheduler.cc:74] Next update check in 49m50s
Sep 9 00:20:01.496093 locksmithd[1611]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_IDLE" NewVersion=0.0.0 NewSize=0
Sep 9 00:20:02.359243 containerd[1580]: time="2025-09-09T00:20:02.358973868Z" level=info msg="TaskExit event in podsandbox handler container_id:\"f12c01fd3fa14ca35049e2bf36b0c7a4e6ff10e2e496416eb8f904a2d0df67db\" id:\"82d0e7e5d77548ceb6bbf95a9c4acc5b8e8503bb9d03e8f7c59a6182c2d5fb73\" pid:5428 exited_at:{seconds:1757377202 nanos:358652861}"
Sep 9 00:20:04.466800 containerd[1580]: time="2025-09-09T00:20:04.466720749Z" level=info msg="TaskExit event in podsandbox handler container_id:\"f12c01fd3fa14ca35049e2bf36b0c7a4e6ff10e2e496416eb8f904a2d0df67db\" id:\"fd6b2031a40c04e56dd0c6bf5ccb18c8394bbeaf21a5ed2946bf8ad8ec1655de\" pid:5452 exited_at:{seconds:1757377204 nanos:466241673}"
Sep 9 00:20:04.506150 sshd[4541]: Connection closed by 10.0.0.1 port 42590
Sep 9 00:20:04.516605 sshd-session[4539]: pam_unix(sshd:session): session closed for user core
Sep 9 00:20:04.521420 systemd[1]: sshd@29-10.0.0.54:22-10.0.0.1:42590.service: Deactivated successfully.
Sep 9 00:20:04.524123 systemd[1]: session-30.scope: Deactivated successfully.
Sep 9 00:20:04.525152 systemd-logind[1565]: Session 30 logged out. Waiting for processes to exit.
Sep 9 00:20:04.526991 systemd-logind[1565]: Removed session 30.