Sep 9 22:10:15.008830 kernel: Linux version 6.12.45-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.3.0 p8) 14.3.0, GNU ld (Gentoo 2.44 p4) 2.44.0) #1 SMP PREEMPT_DYNAMIC Tue Sep 9 19:55:16 -00 2025 Sep 9 22:10:15.008881 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=f0ebd120fc09fb344715b1492c3f1d02e1457be2c9792ea5ffb3fe4b15efa812 Sep 9 22:10:15.008895 kernel: BIOS-provided physical RAM map: Sep 9 22:10:15.008904 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable Sep 9 22:10:15.008913 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved Sep 9 22:10:15.008921 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved Sep 9 22:10:15.008931 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000009cfdbfff] usable Sep 9 22:10:15.008940 kernel: BIOS-e820: [mem 0x000000009cfdc000-0x000000009cffffff] reserved Sep 9 22:10:15.008958 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved Sep 9 22:10:15.008967 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved Sep 9 22:10:15.008976 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved Sep 9 22:10:15.008984 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved Sep 9 22:10:15.008993 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved Sep 9 22:10:15.009001 kernel: NX (Execute Disable) protection: active Sep 9 22:10:15.009016 kernel: APIC: Static calls initialized Sep 9 22:10:15.009025 kernel: SMBIOS 2.8 present. Sep 9 22:10:15.009039 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.16.2-debian-1.16.2-1 04/01/2014 Sep 9 22:10:15.009061 kernel: DMI: Memory slots populated: 1/1 Sep 9 22:10:15.009082 kernel: Hypervisor detected: KVM Sep 9 22:10:15.009098 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00 Sep 9 22:10:15.009125 kernel: kvm-clock: using sched offset of 9290686970 cycles Sep 9 22:10:15.009136 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns Sep 9 22:10:15.009152 kernel: tsc: Detected 2794.748 MHz processor Sep 9 22:10:15.009170 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Sep 9 22:10:15.009179 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Sep 9 22:10:15.009188 kernel: last_pfn = 0x9cfdc max_arch_pfn = 0x400000000 Sep 9 22:10:15.009198 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs Sep 9 22:10:15.009207 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Sep 9 22:10:15.009216 kernel: Using GB pages for direct mapping Sep 9 22:10:15.009225 kernel: ACPI: Early table checksum verification disabled Sep 9 22:10:15.009234 kernel: ACPI: RSDP 0x00000000000F59D0 000014 (v00 BOCHS ) Sep 9 22:10:15.009243 kernel: ACPI: RSDT 0x000000009CFE241A 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001) Sep 9 22:10:15.009254 kernel: ACPI: FACP 0x000000009CFE21FA 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001) Sep 9 22:10:15.009270 kernel: ACPI: DSDT 0x000000009CFE0040 0021BA (v01 BOCHS BXPC 00000001 BXPC 00000001) Sep 9 22:10:15.009279 kernel: ACPI: FACS 0x000000009CFE0000 000040 Sep 9 22:10:15.009288 kernel: ACPI: APIC 0x000000009CFE22EE 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001) Sep 9 22:10:15.009297 kernel: ACPI: HPET 
0x000000009CFE237E 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001) Sep 9 22:10:15.009306 kernel: ACPI: MCFG 0x000000009CFE23B6 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001) Sep 9 22:10:15.009315 kernel: ACPI: WAET 0x000000009CFE23F2 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001) Sep 9 22:10:15.009325 kernel: ACPI: Reserving FACP table memory at [mem 0x9cfe21fa-0x9cfe22ed] Sep 9 22:10:15.009344 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cfe0040-0x9cfe21f9] Sep 9 22:10:15.009354 kernel: ACPI: Reserving FACS table memory at [mem 0x9cfe0000-0x9cfe003f] Sep 9 22:10:15.009367 kernel: ACPI: Reserving APIC table memory at [mem 0x9cfe22ee-0x9cfe237d] Sep 9 22:10:15.009377 kernel: ACPI: Reserving HPET table memory at [mem 0x9cfe237e-0x9cfe23b5] Sep 9 22:10:15.009387 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cfe23b6-0x9cfe23f1] Sep 9 22:10:15.009396 kernel: ACPI: Reserving WAET table memory at [mem 0x9cfe23f2-0x9cfe2419] Sep 9 22:10:15.009409 kernel: No NUMA configuration found Sep 9 22:10:15.009419 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cfdbfff] Sep 9 22:10:15.009429 kernel: NODE_DATA(0) allocated [mem 0x9cfd4dc0-0x9cfdbfff] Sep 9 22:10:15.009439 kernel: Zone ranges: Sep 9 22:10:15.009449 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Sep 9 22:10:15.009460 kernel: DMA32 [mem 0x0000000001000000-0x000000009cfdbfff] Sep 9 22:10:15.009470 kernel: Normal empty Sep 9 22:10:15.009479 kernel: Device empty Sep 9 22:10:15.009489 kernel: Movable zone start for each node Sep 9 22:10:15.009500 kernel: Early memory node ranges Sep 9 22:10:15.009513 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff] Sep 9 22:10:15.009523 kernel: node 0: [mem 0x0000000000100000-0x000000009cfdbfff] Sep 9 22:10:15.009533 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cfdbfff] Sep 9 22:10:15.009543 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Sep 9 22:10:15.009553 kernel: On node 0, zone DMA: 97 pages in unavailable ranges Sep 9 22:10:15.009563 kernel: On node 0, zone DMA32: 12324 pages in unavailable ranges Sep 9 22:10:15.009573 kernel: ACPI: PM-Timer IO Port: 0x608 Sep 9 22:10:15.009588 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1]) Sep 9 22:10:15.009598 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23 Sep 9 22:10:15.009612 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl) Sep 9 22:10:15.009622 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level) Sep 9 22:10:15.009635 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Sep 9 22:10:15.009645 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level) Sep 9 22:10:15.009656 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level) Sep 9 22:10:15.009666 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Sep 9 22:10:15.009676 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000 Sep 9 22:10:15.009686 kernel: TSC deadline timer available Sep 9 22:10:15.009696 kernel: CPU topo: Max. logical packages: 1 Sep 9 22:10:15.009730 kernel: CPU topo: Max. logical dies: 1 Sep 9 22:10:15.009740 kernel: CPU topo: Max. dies per package: 1 Sep 9 22:10:15.009750 kernel: CPU topo: Max. threads per core: 1 Sep 9 22:10:15.009760 kernel: CPU topo: Num. cores per package: 4 Sep 9 22:10:15.009769 kernel: CPU topo: Num. 
threads per package: 4 Sep 9 22:10:15.009780 kernel: CPU topo: Allowing 4 present CPUs plus 0 hotplug CPUs Sep 9 22:10:15.009790 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write() Sep 9 22:10:15.009799 kernel: kvm-guest: KVM setup pv remote TLB flush Sep 9 22:10:15.009809 kernel: kvm-guest: setup PV sched yield Sep 9 22:10:15.009823 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices Sep 9 22:10:15.009839 kernel: Booting paravirtualized kernel on KVM Sep 9 22:10:15.009850 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Sep 9 22:10:15.009860 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1 Sep 9 22:10:15.009870 kernel: percpu: Embedded 60 pages/cpu s207832 r8192 d29736 u524288 Sep 9 22:10:15.009880 kernel: pcpu-alloc: s207832 r8192 d29736 u524288 alloc=1*2097152 Sep 9 22:10:15.009890 kernel: pcpu-alloc: [0] 0 1 2 3 Sep 9 22:10:15.009899 kernel: kvm-guest: PV spinlocks enabled Sep 9 22:10:15.009918 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear) Sep 9 22:10:15.009942 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=f0ebd120fc09fb344715b1492c3f1d02e1457be2c9792ea5ffb3fe4b15efa812 Sep 9 22:10:15.009954 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Sep 9 22:10:15.009963 kernel: random: crng init done Sep 9 22:10:15.009973 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Sep 9 22:10:15.009983 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Sep 9 22:10:15.009994 kernel: Fallback order for Node 0: 0 Sep 9 22:10:15.010004 kernel: Built 1 zonelists, mobility grouping on. Total pages: 642938 Sep 9 22:10:15.010014 kernel: Policy zone: DMA32 Sep 9 22:10:15.010024 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Sep 9 22:10:15.010038 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1 Sep 9 22:10:15.010048 kernel: ftrace: allocating 40102 entries in 157 pages Sep 9 22:10:15.010058 kernel: ftrace: allocated 157 pages with 5 groups Sep 9 22:10:15.010068 kernel: Dynamic Preempt: voluntary Sep 9 22:10:15.010078 kernel: rcu: Preemptible hierarchical RCU implementation. Sep 9 22:10:15.010090 kernel: rcu: RCU event tracing is enabled. Sep 9 22:10:15.010100 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4. Sep 9 22:10:15.010121 kernel: Trampoline variant of Tasks RCU enabled. Sep 9 22:10:15.010136 kernel: Rude variant of Tasks RCU enabled. Sep 9 22:10:15.010151 kernel: Tracing variant of Tasks RCU enabled. Sep 9 22:10:15.010162 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Sep 9 22:10:15.010172 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4 Sep 9 22:10:15.010183 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Sep 9 22:10:15.010193 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Sep 9 22:10:15.010205 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. 
Sep 9 22:10:15.010221 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16 Sep 9 22:10:15.010243 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Sep 9 22:10:15.010268 kernel: Console: colour VGA+ 80x25 Sep 9 22:10:15.010279 kernel: printk: legacy console [ttyS0] enabled Sep 9 22:10:15.010291 kernel: ACPI: Core revision 20240827 Sep 9 22:10:15.010308 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns Sep 9 22:10:15.010319 kernel: APIC: Switch to symmetric I/O mode setup Sep 9 22:10:15.010329 kernel: x2apic enabled Sep 9 22:10:15.010339 kernel: APIC: Switched APIC routing to: physical x2apic Sep 9 22:10:15.010354 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask() Sep 9 22:10:15.010364 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself() Sep 9 22:10:15.010378 kernel: kvm-guest: setup PV IPIs Sep 9 22:10:15.010396 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1 Sep 9 22:10:15.010407 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x2848df6a9de, max_idle_ns: 440795280912 ns Sep 9 22:10:15.010420 kernel: Calibrating delay loop (skipped) preset value.. 5589.49 BogoMIPS (lpj=2794748) Sep 9 22:10:15.010430 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated Sep 9 22:10:15.010440 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127 Sep 9 22:10:15.010450 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0 Sep 9 22:10:15.010460 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Sep 9 22:10:15.010475 kernel: Spectre V2 : Mitigation: Retpolines Sep 9 22:10:15.010486 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT Sep 9 22:10:15.010497 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls Sep 9 22:10:15.010506 kernel: active return thunk: retbleed_return_thunk Sep 9 22:10:15.010516 kernel: RETBleed: Mitigation: untrained return thunk Sep 9 22:10:15.010526 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier Sep 9 22:10:15.010537 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl Sep 9 22:10:15.010547 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied! Sep 9 22:10:15.010558 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options. Sep 9 22:10:15.010572 kernel: active return thunk: srso_return_thunk Sep 9 22:10:15.010583 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode Sep 9 22:10:15.010593 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Sep 9 22:10:15.010604 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Sep 9 22:10:15.010614 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Sep 9 22:10:15.010625 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Sep 9 22:10:15.010635 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format. Sep 9 22:10:15.010647 kernel: Freeing SMP alternatives memory: 32K Sep 9 22:10:15.010663 kernel: pid_max: default: 32768 minimum: 301 Sep 9 22:10:15.010675 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima Sep 9 22:10:15.010687 kernel: landlock: Up and running. Sep 9 22:10:15.010698 kernel: SELinux: Initializing. 
Sep 9 22:10:15.010891 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Sep 9 22:10:15.010904 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Sep 9 22:10:15.010915 kernel: smpboot: CPU0: AMD EPYC 7402P 24-Core Processor (family: 0x17, model: 0x31, stepping: 0x0) Sep 9 22:10:15.010925 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver. Sep 9 22:10:15.010936 kernel: ... version: 0 Sep 9 22:10:15.010952 kernel: ... bit width: 48 Sep 9 22:10:15.010963 kernel: ... generic registers: 6 Sep 9 22:10:15.010974 kernel: ... value mask: 0000ffffffffffff Sep 9 22:10:15.010985 kernel: ... max period: 00007fffffffffff Sep 9 22:10:15.010996 kernel: ... fixed-purpose events: 0 Sep 9 22:10:15.011008 kernel: ... event mask: 000000000000003f Sep 9 22:10:15.011018 kernel: signal: max sigframe size: 1776 Sep 9 22:10:15.011029 kernel: rcu: Hierarchical SRCU implementation. Sep 9 22:10:15.011040 kernel: rcu: Max phase no-delay instances is 400. Sep 9 22:10:15.011050 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level Sep 9 22:10:15.011065 kernel: smp: Bringing up secondary CPUs ... Sep 9 22:10:15.011077 kernel: smpboot: x86: Booting SMP configuration: Sep 9 22:10:15.011087 kernel: .... node #0, CPUs: #1 #2 #3 Sep 9 22:10:15.011098 kernel: smp: Brought up 1 node, 4 CPUs Sep 9 22:10:15.011118 kernel: smpboot: Total of 4 processors activated (22357.98 BogoMIPS) Sep 9 22:10:15.011130 kernel: Memory: 2428916K/2571752K available (14336K kernel code, 2428K rwdata, 9988K rodata, 54092K init, 2876K bss, 136908K reserved, 0K cma-reserved) Sep 9 22:10:15.011140 kernel: devtmpfs: initialized Sep 9 22:10:15.011149 kernel: x86/mm: Memory block size: 128MB Sep 9 22:10:15.011168 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Sep 9 22:10:15.011184 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear) Sep 9 22:10:15.011194 kernel: pinctrl core: initialized pinctrl subsystem Sep 9 22:10:15.011205 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Sep 9 22:10:15.011215 kernel: audit: initializing netlink subsys (disabled) Sep 9 22:10:15.011229 kernel: audit: type=2000 audit(1757455809.908:1): state=initialized audit_enabled=0 res=1 Sep 9 22:10:15.011239 kernel: thermal_sys: Registered thermal governor 'step_wise' Sep 9 22:10:15.011250 kernel: thermal_sys: Registered thermal governor 'user_space' Sep 9 22:10:15.011260 kernel: cpuidle: using governor menu Sep 9 22:10:15.011271 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Sep 9 22:10:15.011284 kernel: dca service started, version 1.12.1 Sep 9 22:10:15.011295 kernel: PCI: ECAM [mem 0xb0000000-0xbfffffff] (base 0xb0000000) for domain 0000 [bus 00-ff] Sep 9 22:10:15.011305 kernel: PCI: ECAM [mem 0xb0000000-0xbfffffff] reserved as E820 entry Sep 9 22:10:15.011313 kernel: PCI: Using configuration type 1 for base access Sep 9 22:10:15.011321 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. 
Sep 9 22:10:15.011329 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Sep 9 22:10:15.011337 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page Sep 9 22:10:15.011345 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Sep 9 22:10:15.011355 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page Sep 9 22:10:15.011363 kernel: ACPI: Added _OSI(Module Device) Sep 9 22:10:15.011371 kernel: ACPI: Added _OSI(Processor Device) Sep 9 22:10:15.011379 kernel: ACPI: Added _OSI(Processor Aggregator Device) Sep 9 22:10:15.011387 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Sep 9 22:10:15.011395 kernel: ACPI: Interpreter enabled Sep 9 22:10:15.011402 kernel: ACPI: PM: (supports S0 S3 S5) Sep 9 22:10:15.011410 kernel: ACPI: Using IOAPIC for interrupt routing Sep 9 22:10:15.011418 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Sep 9 22:10:15.011429 kernel: PCI: Using E820 reservations for host bridge windows Sep 9 22:10:15.011437 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F Sep 9 22:10:15.011445 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Sep 9 22:10:15.011851 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Sep 9 22:10:15.012044 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR] Sep 9 22:10:15.012214 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability] Sep 9 22:10:15.012228 kernel: PCI host bridge to bus 0000:00 Sep 9 22:10:15.012376 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] Sep 9 22:10:15.012938 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] Sep 9 22:10:15.013079 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] Sep 9 22:10:15.013238 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xafffffff window] Sep 9 22:10:15.013396 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window] Sep 9 22:10:15.013569 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x8ffffffff window] Sep 9 22:10:15.013794 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Sep 9 22:10:15.014123 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000 conventional PCI endpoint Sep 9 22:10:15.014341 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000 conventional PCI endpoint Sep 9 22:10:15.014531 kernel: pci 0000:00:01.0: BAR 0 [mem 0xfd000000-0xfdffffff pref] Sep 9 22:10:15.014794 kernel: pci 0000:00:01.0: BAR 2 [mem 0xfebd0000-0xfebd0fff] Sep 9 22:10:15.014980 kernel: pci 0000:00:01.0: ROM [mem 0xfebc0000-0xfebcffff pref] Sep 9 22:10:15.015898 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] Sep 9 22:10:15.016125 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00 conventional PCI endpoint Sep 9 22:10:15.016340 kernel: pci 0000:00:02.0: BAR 0 [io 0xc0c0-0xc0df] Sep 9 22:10:15.016556 kernel: pci 0000:00:02.0: BAR 1 [mem 0xfebd1000-0xfebd1fff] Sep 9 22:10:15.016762 kernel: pci 0000:00:02.0: BAR 4 [mem 0xfe000000-0xfe003fff 64bit pref] Sep 9 22:10:15.017013 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000 conventional PCI endpoint Sep 9 22:10:15.017440 kernel: pci 0000:00:03.0: BAR 0 [io 0xc000-0xc07f] Sep 9 22:10:15.017738 kernel: pci 0000:00:03.0: BAR 1 [mem 0xfebd2000-0xfebd2fff] Sep 9 22:10:15.018044 kernel: pci 0000:00:03.0: BAR 4 [mem 0xfe004000-0xfe007fff 64bit pref] Sep 9 22:10:15.018478 kernel: pci 
0000:00:04.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint Sep 9 22:10:15.018903 kernel: pci 0000:00:04.0: BAR 0 [io 0xc0e0-0xc0ff] Sep 9 22:10:15.019128 kernel: pci 0000:00:04.0: BAR 1 [mem 0xfebd3000-0xfebd3fff] Sep 9 22:10:15.019306 kernel: pci 0000:00:04.0: BAR 4 [mem 0xfe008000-0xfe00bfff 64bit pref] Sep 9 22:10:15.019502 kernel: pci 0000:00:04.0: ROM [mem 0xfeb80000-0xfebbffff pref] Sep 9 22:10:15.019745 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100 conventional PCI endpoint Sep 9 22:10:15.019932 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO Sep 9 22:10:15.020142 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601 conventional PCI endpoint Sep 9 22:10:15.020319 kernel: pci 0000:00:1f.2: BAR 4 [io 0xc100-0xc11f] Sep 9 22:10:15.020543 kernel: pci 0000:00:1f.2: BAR 5 [mem 0xfebd4000-0xfebd4fff] Sep 9 22:10:15.021030 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500 conventional PCI endpoint Sep 9 22:10:15.021238 kernel: pci 0000:00:1f.3: BAR 4 [io 0x0700-0x073f] Sep 9 22:10:15.021256 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10 Sep 9 22:10:15.021274 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10 Sep 9 22:10:15.021286 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 Sep 9 22:10:15.021298 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11 Sep 9 22:10:15.021317 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10 Sep 9 22:10:15.021328 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10 Sep 9 22:10:15.021339 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11 Sep 9 22:10:15.021350 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11 Sep 9 22:10:15.021361 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16 Sep 9 22:10:15.021373 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17 Sep 9 22:10:15.021388 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18 Sep 9 22:10:15.021399 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19 Sep 9 22:10:15.021410 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20 Sep 9 22:10:15.021421 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21 Sep 9 22:10:15.021432 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22 Sep 9 22:10:15.021443 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23 Sep 9 22:10:15.021453 kernel: iommu: Default domain type: Translated Sep 9 22:10:15.021463 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Sep 9 22:10:15.021474 kernel: PCI: Using ACPI for IRQ routing Sep 9 22:10:15.021488 kernel: PCI: pci_cache_line_size set to 64 bytes Sep 9 22:10:15.021499 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff] Sep 9 22:10:15.021510 kernel: e820: reserve RAM buffer [mem 0x9cfdc000-0x9fffffff] Sep 9 22:10:15.021683 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device Sep 9 22:10:15.021911 kernel: pci 0000:00:01.0: vgaarb: bridge control possible Sep 9 22:10:15.022083 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none Sep 9 22:10:15.022101 kernel: vgaarb: loaded Sep 9 22:10:15.022126 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0 Sep 9 22:10:15.022144 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter Sep 9 22:10:15.022155 kernel: clocksource: Switched to clocksource kvm-clock Sep 9 22:10:15.022166 kernel: VFS: Disk quotas dquot_6.6.0 Sep 9 22:10:15.022177 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Sep 9 
22:10:15.022188 kernel: pnp: PnP ACPI init Sep 9 22:10:15.022437 kernel: system 00:05: [mem 0xb0000000-0xbfffffff window] has been reserved Sep 9 22:10:15.022458 kernel: pnp: PnP ACPI: found 6 devices Sep 9 22:10:15.022470 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Sep 9 22:10:15.022487 kernel: NET: Registered PF_INET protocol family Sep 9 22:10:15.022499 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) Sep 9 22:10:15.022509 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) Sep 9 22:10:15.022520 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Sep 9 22:10:15.022531 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) Sep 9 22:10:15.022542 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear) Sep 9 22:10:15.022553 kernel: TCP: Hash tables configured (established 32768 bind 32768) Sep 9 22:10:15.022563 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) Sep 9 22:10:15.022574 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) Sep 9 22:10:15.022588 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Sep 9 22:10:15.022599 kernel: NET: Registered PF_XDP protocol family Sep 9 22:10:15.022918 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] Sep 9 22:10:15.023194 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] Sep 9 22:10:15.023389 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] Sep 9 22:10:15.023558 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xafffffff window] Sep 9 22:10:15.023738 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window] Sep 9 22:10:15.023890 kernel: pci_bus 0000:00: resource 9 [mem 0x100000000-0x8ffffffff window] Sep 9 22:10:15.023914 kernel: PCI: CLS 0 bytes, default 64 Sep 9 22:10:15.023925 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x2848df6a9de, max_idle_ns: 440795280912 ns Sep 9 22:10:15.023938 kernel: Initialise system trusted keyrings Sep 9 22:10:15.023950 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 Sep 9 22:10:15.023961 kernel: Key type asymmetric registered Sep 9 22:10:15.023973 kernel: Asymmetric key parser 'x509' registered Sep 9 22:10:15.023984 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250) Sep 9 22:10:15.023996 kernel: io scheduler mq-deadline registered Sep 9 22:10:15.024008 kernel: io scheduler kyber registered Sep 9 22:10:15.024020 kernel: io scheduler bfq registered Sep 9 22:10:15.024036 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Sep 9 22:10:15.024049 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22 Sep 9 22:10:15.024061 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23 Sep 9 22:10:15.024072 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20 Sep 9 22:10:15.024084 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Sep 9 22:10:15.024096 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Sep 9 22:10:15.024120 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 Sep 9 22:10:15.024132 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Sep 9 22:10:15.024143 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Sep 9 22:10:15.024396 kernel: rtc_cmos 00:04: RTC can wake from S4 Sep 9 22:10:15.024584 kernel: rtc_cmos 00:04: registered as rtc0 Sep 9 22:10:15.024769 kernel: rtc_cmos 00:04: setting system clock to 
2025-09-09T22:10:14 UTC (1757455814) Sep 9 22:10:15.024788 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Sep 9 22:10:15.024931 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram, hpet irqs Sep 9 22:10:15.024946 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled Sep 9 22:10:15.024956 kernel: NET: Registered PF_INET6 protocol family Sep 9 22:10:15.024970 kernel: Segment Routing with IPv6 Sep 9 22:10:15.024980 kernel: In-situ OAM (IOAM) with IPv6 Sep 9 22:10:15.024990 kernel: NET: Registered PF_PACKET protocol family Sep 9 22:10:15.025013 kernel: Key type dns_resolver registered Sep 9 22:10:15.025023 kernel: IPI shorthand broadcast: enabled Sep 9 22:10:15.025033 kernel: sched_clock: Marking stable (4519006373, 252792154)->(5100971813, -329173286) Sep 9 22:10:15.025043 kernel: registered taskstats version 1 Sep 9 22:10:15.025054 kernel: Loading compiled-in X.509 certificates Sep 9 22:10:15.025067 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.45-flatcar: 003b39862f2a560eb5545d7d88a07fc5bdfce075' Sep 9 22:10:15.025084 kernel: Demotion targets for Node 0: null Sep 9 22:10:15.025095 kernel: Key type .fscrypt registered Sep 9 22:10:15.025114 kernel: Key type fscrypt-provisioning registered Sep 9 22:10:15.025125 kernel: ima: No TPM chip found, activating TPM-bypass! Sep 9 22:10:15.025136 kernel: ima: Allocated hash algorithm: sha1 Sep 9 22:10:15.025147 kernel: ima: No architecture policies found Sep 9 22:10:15.025157 kernel: clk: Disabling unused clocks Sep 9 22:10:15.025205 kernel: Warning: unable to open an initial console. Sep 9 22:10:15.025216 kernel: Freeing unused kernel image (initmem) memory: 54092K Sep 9 22:10:15.025230 kernel: Write protecting the kernel read-only data: 24576k Sep 9 22:10:15.025241 kernel: Freeing unused kernel image (rodata/data gap) memory: 252K Sep 9 22:10:15.025251 kernel: Run /init as init process Sep 9 22:10:15.025261 kernel: with arguments: Sep 9 22:10:15.025272 kernel: /init Sep 9 22:10:15.025283 kernel: with environment: Sep 9 22:10:15.025293 kernel: HOME=/ Sep 9 22:10:15.025304 kernel: TERM=linux Sep 9 22:10:15.025322 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Sep 9 22:10:15.025344 systemd[1]: Successfully made /usr/ read-only. Sep 9 22:10:15.025373 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Sep 9 22:10:15.025389 systemd[1]: Detected virtualization kvm. Sep 9 22:10:15.025401 systemd[1]: Detected architecture x86-64. Sep 9 22:10:15.025412 systemd[1]: Running in initrd. Sep 9 22:10:15.025427 systemd[1]: No hostname configured, using default hostname. Sep 9 22:10:15.025439 systemd[1]: Hostname set to . Sep 9 22:10:15.025450 systemd[1]: Initializing machine ID from VM UUID. Sep 9 22:10:15.025462 systemd[1]: Queued start job for default target initrd.target. Sep 9 22:10:15.025473 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Sep 9 22:10:15.025485 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Sep 9 22:10:15.025499 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... 
Sep 9 22:10:15.025511 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Sep 9 22:10:15.025528 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Sep 9 22:10:15.025541 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Sep 9 22:10:15.025565 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Sep 9 22:10:15.025592 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Sep 9 22:10:15.025606 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Sep 9 22:10:15.025620 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Sep 9 22:10:15.025633 systemd[1]: Reached target paths.target - Path Units. Sep 9 22:10:15.025651 systemd[1]: Reached target slices.target - Slice Units. Sep 9 22:10:15.025664 systemd[1]: Reached target swap.target - Swaps. Sep 9 22:10:15.025677 systemd[1]: Reached target timers.target - Timer Units. Sep 9 22:10:15.025690 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Sep 9 22:10:15.025723 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Sep 9 22:10:15.025741 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Sep 9 22:10:15.025754 systemd[1]: Listening on systemd-journald.socket - Journal Sockets. Sep 9 22:10:15.025767 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Sep 9 22:10:15.025781 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Sep 9 22:10:15.025796 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Sep 9 22:10:15.025810 systemd[1]: Reached target sockets.target - Socket Units. Sep 9 22:10:15.025823 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Sep 9 22:10:15.025836 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Sep 9 22:10:15.025852 systemd[1]: Finished network-cleanup.service - Network Cleanup. Sep 9 22:10:15.025868 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply). Sep 9 22:10:15.025881 systemd[1]: Starting systemd-fsck-usr.service... Sep 9 22:10:15.025895 systemd[1]: Starting systemd-journald.service - Journal Service... Sep 9 22:10:15.025908 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Sep 9 22:10:15.025921 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Sep 9 22:10:15.025934 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Sep 9 22:10:15.025950 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Sep 9 22:10:15.025964 systemd[1]: Finished systemd-fsck-usr.service. Sep 9 22:10:15.026020 systemd-journald[220]: Collecting audit messages is disabled. Sep 9 22:10:15.026057 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Sep 9 22:10:15.026071 systemd-journald[220]: Journal started Sep 9 22:10:15.026118 systemd-journald[220]: Runtime Journal (/run/log/journal/6fc7cc64ae1b454398c0c3ef811b1983) is 6M, max 48.6M, 42.5M free. 
Sep 9 22:10:15.018682 systemd-modules-load[221]: Inserted module 'overlay' Sep 9 22:10:15.029725 systemd[1]: Started systemd-journald.service - Journal Service. Sep 9 22:10:15.033925 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Sep 9 22:10:15.034991 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Sep 9 22:10:15.045145 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Sep 9 22:10:15.127410 systemd-tmpfiles[237]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring. Sep 9 22:10:15.167495 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Sep 9 22:10:15.167548 kernel: Bridge firewalling registered Sep 9 22:10:15.132301 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Sep 9 22:10:15.140728 systemd-modules-load[221]: Inserted module 'br_netfilter' Sep 9 22:10:15.173395 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Sep 9 22:10:15.176683 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Sep 9 22:10:15.179979 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Sep 9 22:10:15.186544 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Sep 9 22:10:15.191321 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Sep 9 22:10:15.221147 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Sep 9 22:10:15.223672 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Sep 9 22:10:15.239031 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Sep 9 22:10:15.240554 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Sep 9 22:10:15.326187 dracut-cmdline[264]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=f0ebd120fc09fb344715b1492c3f1d02e1457be2c9792ea5ffb3fe4b15efa812 Sep 9 22:10:15.343942 systemd-resolved[255]: Positive Trust Anchors: Sep 9 22:10:15.343973 systemd-resolved[255]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Sep 9 22:10:15.344020 systemd-resolved[255]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Sep 9 22:10:15.356540 systemd-resolved[255]: Defaulting to hostname 'linux'. Sep 9 22:10:15.361461 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Sep 9 22:10:15.363115 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. 
Sep 9 22:10:15.639787 kernel: SCSI subsystem initialized Sep 9 22:10:15.658972 kernel: Loading iSCSI transport class v2.0-870. Sep 9 22:10:15.695296 kernel: iscsi: registered transport (tcp) Sep 9 22:10:15.762982 kernel: iscsi: registered transport (qla4xxx) Sep 9 22:10:15.763075 kernel: QLogic iSCSI HBA Driver Sep 9 22:10:15.849926 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Sep 9 22:10:15.892158 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Sep 9 22:10:15.895150 systemd[1]: Reached target network-pre.target - Preparation for Network. Sep 9 22:10:16.076423 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Sep 9 22:10:16.091799 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Sep 9 22:10:16.202793 kernel: raid6: avx2x4 gen() 18786 MB/s Sep 9 22:10:16.220788 kernel: raid6: avx2x2 gen() 23235 MB/s Sep 9 22:10:16.238077 kernel: raid6: avx2x1 gen() 17736 MB/s Sep 9 22:10:16.238179 kernel: raid6: using algorithm avx2x2 gen() 23235 MB/s Sep 9 22:10:16.256363 kernel: raid6: .... xor() 14091 MB/s, rmw enabled Sep 9 22:10:16.256470 kernel: raid6: using avx2x2 recovery algorithm Sep 9 22:10:16.290034 kernel: xor: automatically using best checksumming function avx Sep 9 22:10:16.720398 kernel: Btrfs loaded, zoned=no, fsverity=no Sep 9 22:10:16.756599 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Sep 9 22:10:16.769898 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Sep 9 22:10:16.845253 systemd-udevd[472]: Using default interface naming scheme 'v255'. Sep 9 22:10:16.856357 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Sep 9 22:10:16.860858 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Sep 9 22:10:16.902191 dracut-pre-trigger[477]: rd.md=0: removing MD RAID activation Sep 9 22:10:16.960093 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Sep 9 22:10:16.965995 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Sep 9 22:10:17.174471 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Sep 9 22:10:17.185551 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Sep 9 22:10:17.291831 kernel: cryptd: max_cpu_qlen set to 1000 Sep 9 22:10:17.291900 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input2 Sep 9 22:10:17.310108 kernel: libata version 3.00 loaded. Sep 9 22:10:17.310183 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues Sep 9 22:10:17.315263 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB) Sep 9 22:10:17.323393 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Sep 9 22:10:17.323457 kernel: GPT:9289727 != 19775487 Sep 9 22:10:17.323473 kernel: GPT:Alternate GPT header not at the end of the disk. Sep 9 22:10:17.324131 kernel: GPT:9289727 != 19775487 Sep 9 22:10:17.324158 kernel: GPT: Use GNU Parted to correct GPT errors. Sep 9 22:10:17.326493 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Sep 9 22:10:17.326542 kernel: AES CTR mode by8 optimization enabled Sep 9 22:10:17.329152 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Sep 9 22:10:17.329330 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. 
Sep 9 22:10:17.347047 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Sep 9 22:10:17.350682 kernel: ahci 0000:00:1f.2: version 3.0 Sep 9 22:10:17.351481 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16 Sep 9 22:10:17.356497 kernel: ahci 0000:00:1f.2: AHCI vers 0001.0000, 32 command slots, 1.5 Gbps, SATA mode Sep 9 22:10:17.357177 kernel: ahci 0000:00:1f.2: 6/6 ports implemented (port mask 0x3f) Sep 9 22:10:17.357372 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only Sep 9 22:10:17.356057 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Sep 9 22:10:17.362343 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. Sep 9 22:10:17.367729 kernel: scsi host0: ahci Sep 9 22:10:17.369945 kernel: scsi host1: ahci Sep 9 22:10:17.370196 kernel: scsi host2: ahci Sep 9 22:10:17.373724 kernel: scsi host3: ahci Sep 9 22:10:17.377736 kernel: scsi host4: ahci Sep 9 22:10:17.379433 kernel: scsi host5: ahci Sep 9 22:10:17.380050 kernel: ata1: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4100 irq 34 lpm-pol 1 Sep 9 22:10:17.388568 kernel: ata2: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4180 irq 34 lpm-pol 1 Sep 9 22:10:17.388666 kernel: ata3: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4200 irq 34 lpm-pol 1 Sep 9 22:10:17.388717 kernel: ata4: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4280 irq 34 lpm-pol 1 Sep 9 22:10:17.390145 kernel: ata5: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4300 irq 34 lpm-pol 1 Sep 9 22:10:17.392084 kernel: ata6: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4380 irq 34 lpm-pol 1 Sep 9 22:10:17.439654 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. Sep 9 22:10:17.461993 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. Sep 9 22:10:17.474280 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Sep 9 22:10:17.483784 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. Sep 9 22:10:17.507608 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132. Sep 9 22:10:17.510095 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Sep 9 22:10:17.636486 disk-uuid[631]: Primary Header is updated. Sep 9 22:10:17.636486 disk-uuid[631]: Secondary Entries is updated. Sep 9 22:10:17.636486 disk-uuid[631]: Secondary Header is updated. Sep 9 22:10:17.682987 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Sep 9 22:10:17.683015 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Sep 9 22:10:17.674125 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. 
Sep 9 22:10:17.703105 kernel: ata4: SATA link down (SStatus 0 SControl 300) Sep 9 22:10:17.703177 kernel: ata6: SATA link down (SStatus 0 SControl 300) Sep 9 22:10:17.708115 kernel: ata2: SATA link down (SStatus 0 SControl 300) Sep 9 22:10:17.719443 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300) Sep 9 22:10:17.719518 kernel: ata1: SATA link down (SStatus 0 SControl 300) Sep 9 22:10:17.723161 kernel: ata3.00: LPM support broken, forcing max_power Sep 9 22:10:17.723218 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100 Sep 9 22:10:17.723234 kernel: ata3.00: applying bridge limits Sep 9 22:10:17.723247 kernel: ata5: SATA link down (SStatus 0 SControl 300) Sep 9 22:10:17.733562 kernel: ata3.00: LPM support broken, forcing max_power Sep 9 22:10:17.733630 kernel: ata3.00: configured for UDMA/100 Sep 9 22:10:17.738167 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5 Sep 9 22:10:17.909191 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray Sep 9 22:10:17.909569 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Sep 9 22:10:17.936168 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0 Sep 9 22:10:18.479442 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Sep 9 22:10:18.484584 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Sep 9 22:10:18.487523 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Sep 9 22:10:18.489216 systemd[1]: Reached target remote-fs.target - Remote File Systems. Sep 9 22:10:18.510192 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Sep 9 22:10:18.586261 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Sep 9 22:10:18.669073 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Sep 9 22:10:18.675303 disk-uuid[632]: The operation has completed successfully. Sep 9 22:10:18.782842 systemd[1]: disk-uuid.service: Deactivated successfully. Sep 9 22:10:18.783034 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Sep 9 22:10:18.829893 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Sep 9 22:10:18.873580 sh[663]: Success Sep 9 22:10:18.905796 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Sep 9 22:10:18.905887 kernel: device-mapper: uevent: version 1.0.3 Sep 9 22:10:18.908774 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev Sep 9 22:10:18.977523 kernel: device-mapper: verity: sha256 using shash "sha256-ni" Sep 9 22:10:19.073855 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Sep 9 22:10:19.089204 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Sep 9 22:10:19.131570 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Sep 9 22:10:19.143132 kernel: BTRFS: device fsid f72d0a81-8b28-47a3-b3ab-bf6ecd8938f0 devid 1 transid 35 /dev/mapper/usr (253:0) scanned by mount (675) Sep 9 22:10:19.149613 kernel: BTRFS info (device dm-0): first mount of filesystem f72d0a81-8b28-47a3-b3ab-bf6ecd8938f0 Sep 9 22:10:19.149692 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Sep 9 22:10:19.174205 kernel: BTRFS info (device dm-0): disabling log replay at mount time Sep 9 22:10:19.174296 kernel: BTRFS info (device dm-0): enabling free space tree Sep 9 22:10:19.177803 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. 
Sep 9 22:10:19.185964 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System. Sep 9 22:10:19.194537 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Sep 9 22:10:19.202038 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Sep 9 22:10:19.241231 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Sep 9 22:10:19.329756 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (705) Sep 9 22:10:19.335146 kernel: BTRFS info (device vda6): first mount of filesystem 0420e4c2-e4f2-4134-b76b-6a7c4e652ed7 Sep 9 22:10:19.335261 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Sep 9 22:10:19.365910 kernel: BTRFS info (device vda6): turning on async discard Sep 9 22:10:19.366024 kernel: BTRFS info (device vda6): enabling free space tree Sep 9 22:10:19.399694 kernel: BTRFS info (device vda6): last unmount of filesystem 0420e4c2-e4f2-4134-b76b-6a7c4e652ed7 Sep 9 22:10:19.420381 systemd[1]: Finished ignition-setup.service - Ignition (setup). Sep 9 22:10:19.436621 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Sep 9 22:10:19.639069 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Sep 9 22:10:19.651402 systemd[1]: Starting systemd-networkd.service - Network Configuration... Sep 9 22:10:19.673576 ignition[761]: Ignition 2.22.0 Sep 9 22:10:19.673588 ignition[761]: Stage: fetch-offline Sep 9 22:10:19.673630 ignition[761]: no configs at "/usr/lib/ignition/base.d" Sep 9 22:10:19.673641 ignition[761]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Sep 9 22:10:19.673766 ignition[761]: parsed url from cmdline: "" Sep 9 22:10:19.673771 ignition[761]: no config URL provided Sep 9 22:10:19.673777 ignition[761]: reading system config file "/usr/lib/ignition/user.ign" Sep 9 22:10:19.673787 ignition[761]: no config at "/usr/lib/ignition/user.ign" Sep 9 22:10:19.673815 ignition[761]: op(1): [started] loading QEMU firmware config module Sep 9 22:10:19.673821 ignition[761]: op(1): executing: "modprobe" "qemu_fw_cfg" Sep 9 22:10:19.705574 ignition[761]: op(1): [finished] loading QEMU firmware config module Sep 9 22:10:19.767655 systemd-networkd[850]: lo: Link UP Sep 9 22:10:19.768608 systemd-networkd[850]: lo: Gained carrier Sep 9 22:10:19.774368 systemd-networkd[850]: Enumeration completed Sep 9 22:10:19.775549 systemd-networkd[850]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 9 22:10:19.775556 systemd-networkd[850]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Sep 9 22:10:19.776845 systemd[1]: Started systemd-networkd.service - Network Configuration. Sep 9 22:10:19.778734 systemd-networkd[850]: eth0: Link UP Sep 9 22:10:19.780119 systemd[1]: Reached target network.target - Network. Sep 9 22:10:19.836162 systemd-networkd[850]: eth0: Gained carrier Sep 9 22:10:19.836185 systemd-networkd[850]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. 
Sep 9 22:10:19.858688 ignition[761]: parsing config with SHA512: b96841eca305e1c8fb21e66b79005212d479a369feaa3af12624aa4bc9844507941a077da2e8e72701d58b305bcaabef907f63d8e717c0df611780a0f9b6cdc9 Sep 9 22:10:19.859850 systemd-networkd[850]: eth0: DHCPv4 address 10.0.0.117/16, gateway 10.0.0.1 acquired from 10.0.0.1 Sep 9 22:10:19.874109 unknown[761]: fetched base config from "system" Sep 9 22:10:19.874127 unknown[761]: fetched user config from "qemu" Sep 9 22:10:19.875191 ignition[761]: fetch-offline: fetch-offline passed Sep 9 22:10:19.884286 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Sep 9 22:10:19.875295 ignition[761]: Ignition finished successfully Sep 9 22:10:19.888508 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Sep 9 22:10:19.918625 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Sep 9 22:10:19.970570 ignition[858]: Ignition 2.22.0 Sep 9 22:10:19.970594 ignition[858]: Stage: kargs Sep 9 22:10:19.970842 ignition[858]: no configs at "/usr/lib/ignition/base.d" Sep 9 22:10:19.970858 ignition[858]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Sep 9 22:10:19.973307 ignition[858]: kargs: kargs passed Sep 9 22:10:19.973375 ignition[858]: Ignition finished successfully Sep 9 22:10:19.981380 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Sep 9 22:10:19.992301 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Sep 9 22:10:20.058882 ignition[866]: Ignition 2.22.0 Sep 9 22:10:20.058906 ignition[866]: Stage: disks Sep 9 22:10:20.059114 ignition[866]: no configs at "/usr/lib/ignition/base.d" Sep 9 22:10:20.059130 ignition[866]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Sep 9 22:10:20.060271 ignition[866]: disks: disks passed Sep 9 22:10:20.109796 systemd[1]: Finished ignition-disks.service - Ignition (disks). Sep 9 22:10:20.060347 ignition[866]: Ignition finished successfully Sep 9 22:10:20.112128 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Sep 9 22:10:20.115220 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Sep 9 22:10:20.119356 systemd[1]: Reached target local-fs.target - Local File Systems. Sep 9 22:10:20.124237 systemd[1]: Reached target sysinit.target - System Initialization. Sep 9 22:10:20.124353 systemd[1]: Reached target basic.target - Basic System. Sep 9 22:10:20.141620 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Sep 9 22:10:20.195744 systemd-fsck[876]: ROOT: clean, 15/553520 files, 52789/553472 blocks Sep 9 22:10:20.271528 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Sep 9 22:10:20.285067 systemd[1]: Mounting sysroot.mount - /sysroot... Sep 9 22:10:20.584353 kernel: EXT4-fs (vda9): mounted filesystem b54acc07-9600-49db-baed-d5fd6f41a1a5 r/w with ordered data mode. Quota mode: none. Sep 9 22:10:20.584995 systemd[1]: Mounted sysroot.mount - /sysroot. Sep 9 22:10:20.589277 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Sep 9 22:10:20.596014 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Sep 9 22:10:20.624239 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Sep 9 22:10:20.627217 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. 
Sep 9 22:10:20.627291 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Sep 9 22:10:20.627327 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Sep 9 22:10:20.658863 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Sep 9 22:10:20.666045 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (884) Sep 9 22:10:20.666093 kernel: BTRFS info (device vda6): first mount of filesystem 0420e4c2-e4f2-4134-b76b-6a7c4e652ed7 Sep 9 22:10:20.666107 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Sep 9 22:10:20.667891 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Sep 9 22:10:20.702828 kernel: BTRFS info (device vda6): turning on async discard Sep 9 22:10:20.702913 kernel: BTRFS info (device vda6): enabling free space tree Sep 9 22:10:20.706695 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Sep 9 22:10:20.798510 initrd-setup-root[908]: cut: /sysroot/etc/passwd: No such file or directory Sep 9 22:10:20.829399 initrd-setup-root[915]: cut: /sysroot/etc/group: No such file or directory Sep 9 22:10:20.850460 initrd-setup-root[922]: cut: /sysroot/etc/shadow: No such file or directory Sep 9 22:10:20.877040 initrd-setup-root[929]: cut: /sysroot/etc/gshadow: No such file or directory Sep 9 22:10:21.121823 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Sep 9 22:10:21.124642 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Sep 9 22:10:21.132959 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Sep 9 22:10:21.153781 systemd[1]: sysroot-oem.mount: Deactivated successfully. Sep 9 22:10:21.157361 kernel: BTRFS info (device vda6): last unmount of filesystem 0420e4c2-e4f2-4134-b76b-6a7c4e652ed7 Sep 9 22:10:21.209585 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Sep 9 22:10:21.307100 ignition[998]: INFO : Ignition 2.22.0 Sep 9 22:10:21.307100 ignition[998]: INFO : Stage: mount Sep 9 22:10:21.309555 ignition[998]: INFO : no configs at "/usr/lib/ignition/base.d" Sep 9 22:10:21.309555 ignition[998]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Sep 9 22:10:21.309555 ignition[998]: INFO : mount: mount passed Sep 9 22:10:21.309555 ignition[998]: INFO : Ignition finished successfully Sep 9 22:10:21.316069 systemd[1]: Finished ignition-mount.service - Ignition (mount). Sep 9 22:10:21.320992 systemd[1]: Starting ignition-files.service - Ignition (files)... Sep 9 22:10:21.461391 systemd-networkd[850]: eth0: Gained IPv6LL Sep 9 22:10:21.599978 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Sep 9 22:10:21.653298 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (1011) Sep 9 22:10:21.655887 kernel: BTRFS info (device vda6): first mount of filesystem 0420e4c2-e4f2-4134-b76b-6a7c4e652ed7 Sep 9 22:10:21.655938 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Sep 9 22:10:21.667962 kernel: BTRFS info (device vda6): turning on async discard Sep 9 22:10:21.668053 kernel: BTRFS info (device vda6): enabling free space tree Sep 9 22:10:21.676640 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Sep 9 22:10:21.758566 ignition[1028]: INFO : Ignition 2.22.0 Sep 9 22:10:21.758566 ignition[1028]: INFO : Stage: files Sep 9 22:10:21.760938 ignition[1028]: INFO : no configs at "/usr/lib/ignition/base.d" Sep 9 22:10:21.760938 ignition[1028]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Sep 9 22:10:21.760938 ignition[1028]: DEBUG : files: compiled without relabeling support, skipping Sep 9 22:10:21.766093 ignition[1028]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Sep 9 22:10:21.766093 ignition[1028]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Sep 9 22:10:21.771835 ignition[1028]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Sep 9 22:10:21.774087 ignition[1028]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Sep 9 22:10:21.776182 unknown[1028]: wrote ssh authorized keys file for user: core Sep 9 22:10:21.777869 ignition[1028]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Sep 9 22:10:21.781544 ignition[1028]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Sep 9 22:10:21.785949 ignition[1028]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1 Sep 9 22:10:21.871673 ignition[1028]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Sep 9 22:10:22.350278 ignition[1028]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Sep 9 22:10:22.352924 ignition[1028]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz" Sep 9 22:10:22.352924 ignition[1028]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1 Sep 9 22:10:22.564802 ignition[1028]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Sep 9 22:10:22.699419 ignition[1028]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz" Sep 9 22:10:22.699419 ignition[1028]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh" Sep 9 22:10:22.705648 ignition[1028]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh" Sep 9 22:10:22.705648 ignition[1028]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml" Sep 9 22:10:22.705648 ignition[1028]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml" Sep 9 22:10:22.705648 ignition[1028]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Sep 9 22:10:22.705648 ignition[1028]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Sep 9 22:10:22.705648 ignition[1028]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Sep 9 22:10:22.705648 ignition[1028]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Sep 9 22:10:22.729592 ignition[1028]: 
INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf" Sep 9 22:10:22.729592 ignition[1028]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf" Sep 9 22:10:22.729592 ignition[1028]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw" Sep 9 22:10:22.746409 ignition[1028]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw" Sep 9 22:10:22.746409 ignition[1028]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw" Sep 9 22:10:22.746409 ignition[1028]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://extensions.flatcar.org/extensions/kubernetes-v1.31.8-x86-64.raw: attempt #1 Sep 9 22:10:23.170060 ignition[1028]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK Sep 9 22:10:24.588666 ignition[1028]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw" Sep 9 22:10:24.588666 ignition[1028]: INFO : files: op(c): [started] processing unit "prepare-helm.service" Sep 9 22:10:24.605813 ignition[1028]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Sep 9 22:10:24.906909 ignition[1028]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Sep 9 22:10:24.906909 ignition[1028]: INFO : files: op(c): [finished] processing unit "prepare-helm.service" Sep 9 22:10:24.906909 ignition[1028]: INFO : files: op(e): [started] processing unit "coreos-metadata.service" Sep 9 22:10:24.919667 ignition[1028]: INFO : files: op(e): op(f): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Sep 9 22:10:24.919667 ignition[1028]: INFO : files: op(e): op(f): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Sep 9 22:10:24.919667 ignition[1028]: INFO : files: op(e): [finished] processing unit "coreos-metadata.service" Sep 9 22:10:24.919667 ignition[1028]: INFO : files: op(10): [started] setting preset to disabled for "coreos-metadata.service" Sep 9 22:10:25.003730 ignition[1028]: INFO : files: op(10): op(11): [started] removing enablement symlink(s) for "coreos-metadata.service" Sep 9 22:10:25.018987 ignition[1028]: INFO : files: op(10): op(11): [finished] removing enablement symlink(s) for "coreos-metadata.service" Sep 9 22:10:25.018987 ignition[1028]: INFO : files: op(10): [finished] setting preset to disabled for "coreos-metadata.service" Sep 9 22:10:25.018987 ignition[1028]: INFO : files: op(12): [started] setting preset to enabled for "prepare-helm.service" Sep 9 22:10:25.018987 ignition[1028]: INFO : files: op(12): [finished] setting preset to enabled for "prepare-helm.service" Sep 9 22:10:25.029742 ignition[1028]: INFO : files: createResultFile: createFiles: op(13): [started] writing file "/sysroot/etc/.ignition-result.json" Sep 9 22:10:25.032058 ignition[1028]: INFO : files: createResultFile: createFiles: op(13): [finished] writing file 
"/sysroot/etc/.ignition-result.json" Sep 9 22:10:25.034124 ignition[1028]: INFO : files: files passed Sep 9 22:10:25.035066 ignition[1028]: INFO : Ignition finished successfully Sep 9 22:10:25.042442 systemd[1]: Finished ignition-files.service - Ignition (files). Sep 9 22:10:25.047441 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Sep 9 22:10:25.067188 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Sep 9 22:10:25.084619 systemd[1]: ignition-quench.service: Deactivated successfully. Sep 9 22:10:25.084883 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Sep 9 22:10:25.093324 initrd-setup-root-after-ignition[1056]: grep: /sysroot/oem/oem-release: No such file or directory Sep 9 22:10:25.098017 initrd-setup-root-after-ignition[1059]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Sep 9 22:10:25.105919 initrd-setup-root-after-ignition[1059]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Sep 9 22:10:25.110805 initrd-setup-root-after-ignition[1063]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Sep 9 22:10:25.106724 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Sep 9 22:10:25.111072 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Sep 9 22:10:25.115568 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Sep 9 22:10:25.218903 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Sep 9 22:10:25.219885 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Sep 9 22:10:25.230166 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Sep 9 22:10:25.232346 systemd[1]: Reached target initrd.target - Initrd Default Target. Sep 9 22:10:25.232464 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Sep 9 22:10:25.235081 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Sep 9 22:10:25.296008 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Sep 9 22:10:25.305105 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Sep 9 22:10:25.377193 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Sep 9 22:10:25.384282 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Sep 9 22:10:25.396916 systemd[1]: Stopped target timers.target - Timer Units. Sep 9 22:10:25.406213 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Sep 9 22:10:25.406487 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Sep 9 22:10:25.430345 systemd[1]: Stopped target initrd.target - Initrd Default Target. Sep 9 22:10:25.431741 systemd[1]: Stopped target basic.target - Basic System. Sep 9 22:10:25.434039 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Sep 9 22:10:25.438167 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Sep 9 22:10:25.443529 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Sep 9 22:10:25.447463 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System. Sep 9 22:10:25.451326 systemd[1]: Stopped target remote-fs.target - Remote File Systems. 
Sep 9 22:10:25.455628 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Sep 9 22:10:25.458457 systemd[1]: Stopped target sysinit.target - System Initialization. Sep 9 22:10:25.462607 systemd[1]: Stopped target local-fs.target - Local File Systems. Sep 9 22:10:25.466465 systemd[1]: Stopped target swap.target - Swaps. Sep 9 22:10:25.473077 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Sep 9 22:10:25.473304 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Sep 9 22:10:25.484783 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Sep 9 22:10:25.486980 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Sep 9 22:10:25.490190 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Sep 9 22:10:25.490693 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Sep 9 22:10:25.494162 systemd[1]: dracut-initqueue.service: Deactivated successfully. Sep 9 22:10:25.494379 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Sep 9 22:10:25.499390 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Sep 9 22:10:25.499587 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Sep 9 22:10:25.501360 systemd[1]: Stopped target paths.target - Path Units. Sep 9 22:10:25.505034 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Sep 9 22:10:25.507621 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Sep 9 22:10:25.539509 systemd[1]: Stopped target slices.target - Slice Units. Sep 9 22:10:25.540879 systemd[1]: Stopped target sockets.target - Socket Units. Sep 9 22:10:25.545929 systemd[1]: iscsid.socket: Deactivated successfully. Sep 9 22:10:25.546083 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Sep 9 22:10:25.552220 systemd[1]: iscsiuio.socket: Deactivated successfully. Sep 9 22:10:25.552355 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Sep 9 22:10:25.554537 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Sep 9 22:10:25.554841 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Sep 9 22:10:25.563001 systemd[1]: ignition-files.service: Deactivated successfully. Sep 9 22:10:25.563181 systemd[1]: Stopped ignition-files.service - Ignition (files). Sep 9 22:10:25.570040 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Sep 9 22:10:25.577805 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Sep 9 22:10:25.582825 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Sep 9 22:10:25.583090 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Sep 9 22:10:25.591470 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Sep 9 22:10:25.591655 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Sep 9 22:10:25.606021 systemd[1]: initrd-cleanup.service: Deactivated successfully. Sep 9 22:10:25.606177 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. 
Sep 9 22:10:25.636692 ignition[1083]: INFO : Ignition 2.22.0 Sep 9 22:10:25.636692 ignition[1083]: INFO : Stage: umount Sep 9 22:10:25.639012 ignition[1083]: INFO : no configs at "/usr/lib/ignition/base.d" Sep 9 22:10:25.639012 ignition[1083]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Sep 9 22:10:25.644485 ignition[1083]: INFO : umount: umount passed Sep 9 22:10:25.644485 ignition[1083]: INFO : Ignition finished successfully Sep 9 22:10:25.650178 systemd[1]: ignition-mount.service: Deactivated successfully. Sep 9 22:10:25.650392 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Sep 9 22:10:25.675483 systemd[1]: Stopped target network.target - Network. Sep 9 22:10:25.679771 systemd[1]: ignition-disks.service: Deactivated successfully. Sep 9 22:10:25.679923 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Sep 9 22:10:25.683527 systemd[1]: ignition-kargs.service: Deactivated successfully. Sep 9 22:10:25.683627 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Sep 9 22:10:25.693887 systemd[1]: ignition-setup.service: Deactivated successfully. Sep 9 22:10:25.694016 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Sep 9 22:10:25.694129 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Sep 9 22:10:25.694200 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Sep 9 22:10:25.694503 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Sep 9 22:10:25.694677 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Sep 9 22:10:25.698428 systemd[1]: sysroot-boot.mount: Deactivated successfully. Sep 9 22:10:25.702044 systemd[1]: sysroot-boot.service: Deactivated successfully. Sep 9 22:10:25.702240 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Sep 9 22:10:25.706357 systemd[1]: initrd-setup-root.service: Deactivated successfully. Sep 9 22:10:25.706489 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Sep 9 22:10:25.717475 systemd[1]: systemd-resolved.service: Deactivated successfully. Sep 9 22:10:25.717699 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Sep 9 22:10:25.738395 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully. Sep 9 22:10:25.738986 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Sep 9 22:10:25.739076 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Sep 9 22:10:25.815071 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully. Sep 9 22:10:25.820891 systemd[1]: systemd-networkd.service: Deactivated successfully. Sep 9 22:10:25.821118 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Sep 9 22:10:25.826941 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully. Sep 9 22:10:25.827783 systemd[1]: Stopped target network-pre.target - Preparation for Network. Sep 9 22:10:25.833209 systemd[1]: systemd-networkd.socket: Deactivated successfully. Sep 9 22:10:25.833327 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Sep 9 22:10:25.839636 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Sep 9 22:10:25.841002 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Sep 9 22:10:25.841102 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. 
Sep 9 22:10:25.842854 systemd[1]: systemd-sysctl.service: Deactivated successfully. Sep 9 22:10:25.842977 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Sep 9 22:10:25.847458 systemd[1]: systemd-modules-load.service: Deactivated successfully. Sep 9 22:10:25.847877 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Sep 9 22:10:25.903392 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Sep 9 22:10:25.922804 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Sep 9 22:10:25.974619 systemd[1]: systemd-udevd.service: Deactivated successfully. Sep 9 22:10:25.997773 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Sep 9 22:10:26.011564 systemd[1]: network-cleanup.service: Deactivated successfully. Sep 9 22:10:26.011763 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Sep 9 22:10:26.028422 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Sep 9 22:10:26.029978 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Sep 9 22:10:26.038593 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Sep 9 22:10:26.039278 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Sep 9 22:10:26.044648 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Sep 9 22:10:26.044771 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Sep 9 22:10:26.049275 systemd[1]: dracut-cmdline.service: Deactivated successfully. Sep 9 22:10:26.049377 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Sep 9 22:10:26.062664 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Sep 9 22:10:26.062869 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Sep 9 22:10:26.073662 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Sep 9 22:10:26.084148 systemd[1]: systemd-network-generator.service: Deactivated successfully. Sep 9 22:10:26.084292 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line. Sep 9 22:10:26.102751 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Sep 9 22:10:26.102877 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Sep 9 22:10:26.109796 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Sep 9 22:10:26.109915 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Sep 9 22:10:26.114183 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Sep 9 22:10:26.114902 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Sep 9 22:10:26.122104 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Sep 9 22:10:26.122230 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Sep 9 22:10:26.137165 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Sep 9 22:10:26.137329 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Sep 9 22:10:26.156648 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Sep 9 22:10:26.162325 systemd[1]: Starting initrd-switch-root.service - Switch Root... Sep 9 22:10:26.205658 systemd[1]: Switching root. 
Sep 9 22:10:26.274397 systemd-journald[220]: Journal stopped Sep 9 22:10:29.970526 systemd-journald[220]: Received SIGTERM from PID 1 (systemd). Sep 9 22:10:29.970609 kernel: SELinux: policy capability network_peer_controls=1 Sep 9 22:10:29.970632 kernel: SELinux: policy capability open_perms=1 Sep 9 22:10:29.970654 kernel: SELinux: policy capability extended_socket_class=1 Sep 9 22:10:29.970688 kernel: SELinux: policy capability always_check_network=0 Sep 9 22:10:29.970720 kernel: SELinux: policy capability cgroup_seclabel=1 Sep 9 22:10:29.970736 kernel: SELinux: policy capability nnp_nosuid_transition=1 Sep 9 22:10:29.970750 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Sep 9 22:10:29.970765 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Sep 9 22:10:29.970779 kernel: SELinux: policy capability userspace_initial_context=0 Sep 9 22:10:29.970795 kernel: audit: type=1403 audit(1757455827.469:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Sep 9 22:10:29.970816 systemd[1]: Successfully loaded SELinux policy in 150.116ms. Sep 9 22:10:29.970847 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 16.427ms. Sep 9 22:10:29.970864 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Sep 9 22:10:29.970880 systemd[1]: Detected virtualization kvm. Sep 9 22:10:29.970897 systemd[1]: Detected architecture x86-64. Sep 9 22:10:29.970913 systemd[1]: Detected first boot. Sep 9 22:10:29.970929 systemd[1]: Initializing machine ID from VM UUID. Sep 9 22:10:29.970945 zram_generator::config[1128]: No configuration found. Sep 9 22:10:29.970973 kernel: Guest personality initialized and is inactive Sep 9 22:10:29.970989 kernel: VMCI host device registered (name=vmci, major=10, minor=125) Sep 9 22:10:29.971004 kernel: Initialized host personality Sep 9 22:10:29.971018 kernel: NET: Registered PF_VSOCK protocol family Sep 9 22:10:29.971032 systemd[1]: Populated /etc with preset unit settings. Sep 9 22:10:29.971050 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully. Sep 9 22:10:29.971066 systemd[1]: initrd-switch-root.service: Deactivated successfully. Sep 9 22:10:29.971081 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Sep 9 22:10:29.971098 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Sep 9 22:10:29.971117 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Sep 9 22:10:29.971133 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Sep 9 22:10:29.971149 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Sep 9 22:10:29.971166 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Sep 9 22:10:29.971184 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Sep 9 22:10:29.971202 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Sep 9 22:10:29.971221 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Sep 9 22:10:29.971243 systemd[1]: Created slice user.slice - User and Session Slice. Sep 9 22:10:29.971265 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. 
Sep 9 22:10:29.971283 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Sep 9 22:10:29.971300 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Sep 9 22:10:29.971318 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Sep 9 22:10:29.971343 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Sep 9 22:10:29.971362 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Sep 9 22:10:29.971380 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Sep 9 22:10:29.971404 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Sep 9 22:10:29.971425 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Sep 9 22:10:29.971443 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Sep 9 22:10:29.971461 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Sep 9 22:10:29.971478 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Sep 9 22:10:29.971496 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Sep 9 22:10:29.971513 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Sep 9 22:10:29.971531 systemd[1]: Reached target remote-fs.target - Remote File Systems. Sep 9 22:10:29.971548 systemd[1]: Reached target slices.target - Slice Units. Sep 9 22:10:29.971565 systemd[1]: Reached target swap.target - Swaps. Sep 9 22:10:29.971587 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Sep 9 22:10:29.971605 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Sep 9 22:10:29.971623 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption. Sep 9 22:10:29.971643 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Sep 9 22:10:29.971661 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Sep 9 22:10:29.971690 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Sep 9 22:10:29.971725 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Sep 9 22:10:29.971743 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Sep 9 22:10:29.971761 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Sep 9 22:10:29.971783 systemd[1]: Mounting media.mount - External Media Directory... Sep 9 22:10:29.971802 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 9 22:10:29.971819 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Sep 9 22:10:29.971837 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Sep 9 22:10:29.971854 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Sep 9 22:10:29.971872 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Sep 9 22:10:29.971890 systemd[1]: Reached target machines.target - Containers. Sep 9 22:10:29.971907 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... 
Sep 9 22:10:29.971925 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Sep 9 22:10:29.971947 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Sep 9 22:10:29.971964 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Sep 9 22:10:29.971982 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Sep 9 22:10:29.971999 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Sep 9 22:10:29.972017 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Sep 9 22:10:29.972036 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Sep 9 22:10:29.972056 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Sep 9 22:10:29.972074 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Sep 9 22:10:29.972096 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Sep 9 22:10:29.972113 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Sep 9 22:10:29.972130 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Sep 9 22:10:29.972147 systemd[1]: Stopped systemd-fsck-usr.service. Sep 9 22:10:29.972165 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Sep 9 22:10:29.972182 kernel: fuse: init (API version 7.41) Sep 9 22:10:29.972200 systemd[1]: Starting systemd-journald.service - Journal Service... Sep 9 22:10:29.972217 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Sep 9 22:10:29.972234 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Sep 9 22:10:29.972255 kernel: loop: module loaded Sep 9 22:10:29.972272 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Sep 9 22:10:29.972290 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials... Sep 9 22:10:29.972307 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Sep 9 22:10:29.972325 systemd[1]: verity-setup.service: Deactivated successfully. Sep 9 22:10:29.972345 systemd[1]: Stopped verity-setup.service. Sep 9 22:10:29.972363 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 9 22:10:29.972381 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Sep 9 22:10:29.972400 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Sep 9 22:10:29.972417 systemd[1]: Mounted media.mount - External Media Directory. Sep 9 22:10:29.972439 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Sep 9 22:10:29.972462 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Sep 9 22:10:29.972479 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Sep 9 22:10:29.972497 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Sep 9 22:10:29.972515 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Sep 9 22:10:29.972532 systemd[1]: modprobe@configfs.service: Deactivated successfully. 
Sep 9 22:10:29.972550 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Sep 9 22:10:29.972568 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 9 22:10:29.972586 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Sep 9 22:10:29.972607 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 9 22:10:29.972626 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Sep 9 22:10:29.972643 systemd[1]: modprobe@fuse.service: Deactivated successfully. Sep 9 22:10:29.972660 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Sep 9 22:10:29.972689 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 9 22:10:29.972725 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Sep 9 22:10:29.972744 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Sep 9 22:10:29.972762 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Sep 9 22:10:29.972818 systemd-journald[1203]: Collecting audit messages is disabled. Sep 9 22:10:29.972859 systemd-journald[1203]: Journal started Sep 9 22:10:29.972892 systemd-journald[1203]: Runtime Journal (/run/log/journal/6fc7cc64ae1b454398c0c3ef811b1983) is 6M, max 48.6M, 42.5M free. Sep 9 22:10:29.130488 systemd[1]: Queued start job for default target multi-user.target. Sep 9 22:10:29.157184 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Sep 9 22:10:29.159162 systemd[1]: systemd-journald.service: Deactivated successfully. Sep 9 22:10:29.984724 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Sep 9 22:10:29.989752 systemd[1]: Started systemd-journald.service - Journal Service. Sep 9 22:10:30.048604 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials. Sep 9 22:10:30.050905 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Sep 9 22:10:30.082505 kernel: ACPI: bus type drm_connector registered Sep 9 22:10:30.085656 systemd[1]: Reached target network-pre.target - Preparation for Network. Sep 9 22:10:30.094966 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Sep 9 22:10:30.103430 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Sep 9 22:10:30.107941 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Sep 9 22:10:30.108014 systemd[1]: Reached target local-fs.target - Local File Systems. Sep 9 22:10:30.112519 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management. Sep 9 22:10:30.125750 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Sep 9 22:10:30.136550 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Sep 9 22:10:30.142919 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Sep 9 22:10:30.149973 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Sep 9 22:10:30.158986 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Sep 9 22:10:30.167017 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... 
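The journal shown here starts out in the runtime journal under /run/log/journal/6fc7cc64ae1b454398c0c3ef811b1983 and is flushed to persistent storage under /var/log/journal shortly afterwards. To read a boot log on a running machine in the same microsecond-timestamp form used above, standard journalctl options (not commands taken from this log) are enough:

  journalctl -b 0 -o short-precise            # current boot, precise timestamps as seen here
  journalctl -b 0 -u ignition-files.service   # narrow to one unit, e.g. the Ignition files stage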
Sep 9 22:10:30.175919 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Sep 9 22:10:30.179526 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Sep 9 22:10:30.193482 systemd-journald[1203]: Time spent on flushing to /var/log/journal/6fc7cc64ae1b454398c0c3ef811b1983 is 43.341ms for 983 entries. Sep 9 22:10:30.193482 systemd-journald[1203]: System Journal (/var/log/journal/6fc7cc64ae1b454398c0c3ef811b1983) is 8M, max 195.6M, 187.6M free. Sep 9 22:10:30.259087 systemd-journald[1203]: Received client request to flush runtime journal. Sep 9 22:10:30.259196 kernel: loop0: detected capacity change from 0 to 221472 Sep 9 22:10:30.195382 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Sep 9 22:10:30.212992 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Sep 9 22:10:30.231344 systemd[1]: modprobe@drm.service: Deactivated successfully. Sep 9 22:10:30.233149 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Sep 9 22:10:30.238925 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Sep 9 22:10:30.243051 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Sep 9 22:10:30.249536 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Sep 9 22:10:30.265797 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Sep 9 22:10:30.268083 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Sep 9 22:10:30.276922 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Sep 9 22:10:30.284505 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk... Sep 9 22:10:30.295798 systemd-tmpfiles[1248]: ACLs are not supported, ignoring. Sep 9 22:10:30.295827 systemd-tmpfiles[1248]: ACLs are not supported, ignoring. Sep 9 22:10:30.310486 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Sep 9 22:10:30.323344 systemd[1]: Starting systemd-sysusers.service - Create System Users... Sep 9 22:10:30.385836 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Sep 9 22:10:30.386732 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Sep 9 22:10:30.389309 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. Sep 9 22:10:30.447762 kernel: loop1: detected capacity change from 0 to 110984 Sep 9 22:10:30.462151 systemd[1]: Finished systemd-sysusers.service - Create System Users. Sep 9 22:10:30.470339 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Sep 9 22:10:30.516490 systemd-tmpfiles[1267]: ACLs are not supported, ignoring. Sep 9 22:10:30.516852 systemd-tmpfiles[1267]: ACLs are not supported, ignoring. Sep 9 22:10:30.524912 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Sep 9 22:10:30.548224 kernel: loop2: detected capacity change from 0 to 128016 Sep 9 22:10:30.659324 kernel: loop3: detected capacity change from 0 to 221472 Sep 9 22:10:30.720812 kernel: loop4: detected capacity change from 0 to 110984 Sep 9 22:10:30.781858 kernel: loop5: detected capacity change from 0 to 128016 Sep 9 22:10:30.865581 (sd-merge)[1273]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'. 
Sep 9 22:10:30.866397 (sd-merge)[1273]: Merged extensions into '/usr'. Sep 9 22:10:30.884606 systemd[1]: Reload requested from client PID 1247 ('systemd-sysext') (unit systemd-sysext.service)... Sep 9 22:10:30.884636 systemd[1]: Reloading... Sep 9 22:10:31.000746 zram_generator::config[1298]: No configuration found. Sep 9 22:10:31.294553 systemd[1]: Reloading finished in 409 ms. Sep 9 22:10:31.535456 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Sep 9 22:10:31.559753 systemd[1]: Starting ensure-sysext.service... Sep 9 22:10:31.562936 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Sep 9 22:10:31.701186 systemd[1]: Reload requested from client PID 1335 ('systemctl') (unit ensure-sysext.service)... Sep 9 22:10:31.701216 systemd[1]: Reloading... Sep 9 22:10:31.743734 systemd-tmpfiles[1336]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring. Sep 9 22:10:31.743883 systemd-tmpfiles[1336]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring. Sep 9 22:10:31.744402 systemd-tmpfiles[1336]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Sep 9 22:10:31.744920 systemd-tmpfiles[1336]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Sep 9 22:10:31.746332 systemd-tmpfiles[1336]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Sep 9 22:10:31.746920 systemd-tmpfiles[1336]: ACLs are not supported, ignoring. Sep 9 22:10:31.747160 systemd-tmpfiles[1336]: ACLs are not supported, ignoring. Sep 9 22:10:31.755351 systemd-tmpfiles[1336]: Detected autofs mount point /boot during canonicalization of boot. Sep 9 22:10:31.755574 systemd-tmpfiles[1336]: Skipping /boot Sep 9 22:10:31.763909 ldconfig[1242]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Sep 9 22:10:31.779369 systemd-tmpfiles[1336]: Detected autofs mount point /boot during canonicalization of boot. Sep 9 22:10:31.779536 systemd-tmpfiles[1336]: Skipping /boot Sep 9 22:10:31.786756 zram_generator::config[1363]: No configuration found. Sep 9 22:10:32.068461 systemd[1]: Reloading finished in 366 ms. Sep 9 22:10:32.094483 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Sep 9 22:10:32.100220 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Sep 9 22:10:32.141155 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Sep 9 22:10:32.164245 systemd[1]: Starting audit-rules.service - Load Audit Rules... Sep 9 22:10:32.194190 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Sep 9 22:10:32.203043 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Sep 9 22:10:32.221733 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Sep 9 22:10:32.226839 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Sep 9 22:10:32.233816 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Sep 9 22:10:32.243655 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). 
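The (sd-merge) entries show systemd-sysext overlaying the 'containerd-flatcar', 'docker-flatcar', and 'kubernetes' extension images onto /usr, which is presumably why containerd and kubelet units become available later in this log. The kubernetes image is the one Ignition linked into /etc/extensions earlier. As a sketch of what happens automatically here, roughly equivalent manual steps would be:

  ln -s /opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw /etc/extensions/kubernetes.raw
  systemd-sysext refresh   # unmerge and re-merge all extension images under /usr and /opt
  systemd-sysext status    # list the hierarchies and the extensions merged into them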
Sep 9 22:10:32.243958 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Sep 9 22:10:32.248480 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Sep 9 22:10:32.252926 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Sep 9 22:10:32.267271 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Sep 9 22:10:32.269469 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Sep 9 22:10:32.269628 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Sep 9 22:10:32.275051 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Sep 9 22:10:32.277310 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 9 22:10:32.286476 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Sep 9 22:10:32.291153 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 9 22:10:32.300169 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Sep 9 22:10:32.302734 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 9 22:10:32.304286 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Sep 9 22:10:32.304558 augenrules[1433]: No rules Sep 9 22:10:32.306666 systemd[1]: audit-rules.service: Deactivated successfully. Sep 9 22:10:32.307069 systemd[1]: Finished audit-rules.service - Load Audit Rules. Sep 9 22:10:32.309144 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 9 22:10:32.309452 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Sep 9 22:10:32.310777 systemd-udevd[1416]: Using default interface naming scheme 'v255'. Sep 9 22:10:32.327123 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 9 22:10:32.332811 systemd[1]: Starting audit-rules.service - Load Audit Rules... Sep 9 22:10:32.335931 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Sep 9 22:10:32.339174 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Sep 9 22:10:32.351310 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Sep 9 22:10:32.355281 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Sep 9 22:10:32.359583 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Sep 9 22:10:32.361351 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Sep 9 22:10:32.361592 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Sep 9 22:10:32.365219 systemd[1]: Starting systemd-update-done.service - Update is Completed... Sep 9 22:10:32.366641 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). 
Sep 9 22:10:32.379057 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Sep 9 22:10:32.382015 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 9 22:10:32.383151 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Sep 9 22:10:32.389742 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 9 22:10:32.390117 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Sep 9 22:10:32.394632 systemd[1]: Finished ensure-sysext.service. Sep 9 22:10:32.396985 systemd[1]: modprobe@drm.service: Deactivated successfully. Sep 9 22:10:32.397336 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Sep 9 22:10:32.406797 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 9 22:10:32.407237 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Sep 9 22:10:32.411068 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Sep 9 22:10:32.416026 systemd[1]: Started systemd-userdbd.service - User Database Manager. Sep 9 22:10:32.427016 systemd[1]: Starting systemd-networkd.service - Network Configuration... Sep 9 22:10:32.428383 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Sep 9 22:10:32.428493 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Sep 9 22:10:32.432906 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Sep 9 22:10:32.434951 systemd[1]: Finished systemd-update-done.service - Update is Completed. Sep 9 22:10:32.446527 augenrules[1443]: /sbin/augenrules: No change Sep 9 22:10:32.478216 augenrules[1500]: No rules Sep 9 22:10:32.481012 systemd[1]: audit-rules.service: Deactivated successfully. Sep 9 22:10:32.483386 systemd[1]: Finished audit-rules.service - Load Audit Rules. Sep 9 22:10:32.561603 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Sep 9 22:10:32.565421 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Sep 9 22:10:32.739164 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Sep 9 22:10:32.773949 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Sep 9 22:10:32.782961 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Sep 9 22:10:32.824765 kernel: mousedev: PS/2 mouse device common for all mice Sep 9 22:10:32.830928 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Sep 9 22:10:32.849798 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3 Sep 9 22:10:32.857400 systemd-networkd[1474]: lo: Link UP Sep 9 22:10:32.857424 systemd-networkd[1474]: lo: Gained carrier Sep 9 22:10:32.860688 systemd-networkd[1474]: Enumeration completed Sep 9 22:10:32.860908 systemd[1]: Started systemd-networkd.service - Network Configuration. Sep 9 22:10:32.861620 systemd-networkd[1474]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. 
Sep 9 22:10:32.861643 systemd-networkd[1474]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Sep 9 22:10:32.862146 systemd-resolved[1413]: Positive Trust Anchors: Sep 9 22:10:32.862158 systemd-resolved[1413]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Sep 9 22:10:32.862187 systemd-resolved[1413]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Sep 9 22:10:32.862932 systemd-networkd[1474]: eth0: Link UP Sep 9 22:10:32.863195 systemd-networkd[1474]: eth0: Gained carrier Sep 9 22:10:32.863214 systemd-networkd[1474]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 9 22:10:32.867075 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... Sep 9 22:10:32.868512 systemd-resolved[1413]: Defaulting to hostname 'linux'. Sep 9 22:10:32.870927 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Sep 9 22:10:32.872629 kernel: ACPI: button: Power Button [PWRF] Sep 9 22:10:32.873046 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Sep 9 22:10:32.875068 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Sep 9 22:10:32.876864 systemd[1]: Reached target network.target - Network. Sep 9 22:10:32.878545 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Sep 9 22:10:32.880214 systemd[1]: Reached target sysinit.target - System Initialization. Sep 9 22:10:32.880824 systemd-networkd[1474]: eth0: DHCPv4 address 10.0.0.117/16, gateway 10.0.0.1 acquired from 10.0.0.1 Sep 9 22:10:32.881686 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Sep 9 22:10:32.882514 systemd-timesyncd[1477]: Network configuration changed, trying to establish connection. Sep 9 22:10:32.883403 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Sep 9 22:10:32.884307 systemd-timesyncd[1477]: Contacted time server 10.0.0.1:123 (10.0.0.1). Sep 9 22:10:32.884396 systemd-timesyncd[1477]: Initial clock synchronization to Tue 2025-09-09 22:10:33.227490 UTC. Sep 9 22:10:32.885213 systemd[1]: Started google-oslogin-cache.timer - NSS cache refresh timer. Sep 9 22:10:32.887443 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Sep 9 22:10:32.889327 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Sep 9 22:10:32.889390 systemd[1]: Reached target paths.target - Path Units. Sep 9 22:10:32.890637 systemd[1]: Reached target time-set.target - System Time Set. Sep 9 22:10:32.892751 systemd[1]: Started logrotate.timer - Daily rotation of log files. Sep 9 22:10:32.894622 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. 
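eth0 is configured from the stock /usr/lib/systemd/network/zz-default.network shipped with the OS and picks up the same DHCPv4 lease the initramfs obtained earlier (10.0.0.117/16 from 10.0.0.1). The shipped file is not reproduced in the journal; a minimal .network file with the same observable effect (a sketch, not the actual contents of zz-default.network) would be:

  [Match]
  Name=eth0          # illustration only; the shipped default matches interfaces much more broadly

  [Network]
  DHCP=yes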
Sep 9 22:10:32.896131 systemd[1]: Reached target timers.target - Timer Units. Sep 9 22:10:32.899463 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Sep 9 22:10:32.905127 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt Sep 9 22:10:32.907111 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD Sep 9 22:10:32.905478 systemd[1]: Starting docker.socket - Docker Socket for the API... Sep 9 22:10:32.911739 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). Sep 9 22:10:32.913763 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). Sep 9 22:10:32.915532 systemd[1]: Reached target ssh-access.target - SSH Access Available. Sep 9 22:10:32.938911 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Sep 9 22:10:32.942482 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. Sep 9 22:10:32.946821 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. Sep 9 22:10:32.948753 systemd[1]: Listening on docker.socket - Docker Socket for the API. Sep 9 22:10:32.961226 systemd[1]: Reached target sockets.target - Socket Units. Sep 9 22:10:32.963240 systemd[1]: Reached target basic.target - Basic System. Sep 9 22:10:32.964490 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Sep 9 22:10:32.964534 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Sep 9 22:10:32.969017 systemd[1]: Starting containerd.service - containerd container runtime... Sep 9 22:10:32.973038 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Sep 9 22:10:32.975945 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Sep 9 22:10:32.979386 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Sep 9 22:10:32.990807 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Sep 9 22:10:32.992435 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Sep 9 22:10:32.995888 jq[1551]: false Sep 9 22:10:32.997013 systemd[1]: Starting google-oslogin-cache.service - NSS cache refresh... Sep 9 22:10:33.001050 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Sep 9 22:10:33.009701 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Sep 9 22:10:33.021543 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Sep 9 22:10:33.025827 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Sep 9 22:10:33.035540 systemd[1]: Starting systemd-logind.service - User Login Management... Sep 9 22:10:33.036319 google_oslogin_nss_cache[1553]: oslogin_cache_refresh[1553]: Refreshing passwd entry cache Sep 9 22:10:33.036362 oslogin_cache_refresh[1553]: Refreshing passwd entry cache Sep 9 22:10:33.038304 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Sep 9 22:10:33.047093 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. 
Sep 9 22:10:33.048594 google_oslogin_nss_cache[1553]: oslogin_cache_refresh[1553]: Failure getting users, quitting Sep 9 22:10:33.048583 oslogin_cache_refresh[1553]: Failure getting users, quitting Sep 9 22:10:33.048708 google_oslogin_nss_cache[1553]: oslogin_cache_refresh[1553]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. Sep 9 22:10:33.048708 google_oslogin_nss_cache[1553]: oslogin_cache_refresh[1553]: Refreshing group entry cache Sep 9 22:10:33.048616 oslogin_cache_refresh[1553]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. Sep 9 22:10:33.048715 oslogin_cache_refresh[1553]: Refreshing group entry cache Sep 9 22:10:33.049264 systemd[1]: Starting update-engine.service - Update Engine... Sep 9 22:10:33.053003 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Sep 9 22:10:33.061889 google_oslogin_nss_cache[1553]: oslogin_cache_refresh[1553]: Failure getting groups, quitting Sep 9 22:10:33.061889 google_oslogin_nss_cache[1553]: oslogin_cache_refresh[1553]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Sep 9 22:10:33.061878 oslogin_cache_refresh[1553]: Failure getting groups, quitting Sep 9 22:10:33.061903 oslogin_cache_refresh[1553]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Sep 9 22:10:33.062655 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Sep 9 22:10:33.065063 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Sep 9 22:10:33.074842 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Sep 9 22:10:33.075590 systemd[1]: google-oslogin-cache.service: Deactivated successfully. Sep 9 22:10:33.076037 systemd[1]: Finished google-oslogin-cache.service - NSS cache refresh. Sep 9 22:10:33.078037 systemd[1]: motdgen.service: Deactivated successfully. Sep 9 22:10:33.078469 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Sep 9 22:10:33.106201 update_engine[1565]: I20250909 22:10:33.106067 1565 main.cc:92] Flatcar Update Engine starting Sep 9 22:10:33.246024 extend-filesystems[1552]: Found /dev/vda6 Sep 9 22:10:33.258866 jq[1570]: true Sep 9 22:10:33.265934 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Sep 9 22:10:33.270112 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Sep 9 22:10:33.270463 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Sep 9 22:10:33.370589 extend-filesystems[1552]: Found /dev/vda9 Sep 9 22:10:33.369041 systemd[1]: Started dbus.service - D-Bus System Message Bus. Sep 9 22:10:33.328596 dbus-daemon[1549]: [system] SELinux support is enabled Sep 9 22:10:33.375345 extend-filesystems[1552]: Checking size of /dev/vda9 Sep 9 22:10:33.382838 update_engine[1565]: I20250909 22:10:33.382097 1565 update_check_scheduler.cc:74] Next update check in 6m11s Sep 9 22:10:33.382591 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Sep 9 22:10:33.383407 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. 
Sep 9 22:10:33.386360 tar[1574]: linux-amd64/helm Sep 9 22:10:33.385324 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Sep 9 22:10:33.385497 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Sep 9 22:10:33.387506 systemd[1]: Started update-engine.service - Update Engine. Sep 9 22:10:33.395513 (ntainerd)[1584]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Sep 9 22:10:33.399837 systemd[1]: Started locksmithd.service - Cluster reboot manager. Sep 9 22:10:33.406232 jq[1583]: true Sep 9 22:10:33.429873 extend-filesystems[1552]: Resized partition /dev/vda9 Sep 9 22:10:33.475052 sshd_keygen[1573]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Sep 9 22:10:33.488237 extend-filesystems[1610]: resize2fs 1.47.3 (8-Jul-2025) Sep 9 22:10:33.554628 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Sep 9 22:10:33.609198 systemd[1]: Starting issuegen.service - Generate /run/issue... Sep 9 22:10:33.622855 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Sep 9 22:10:33.646198 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Sep 9 22:10:33.713529 systemd[1]: Started sshd@0-10.0.0.117:22-10.0.0.1:54080.service - OpenSSH per-connection server daemon (10.0.0.1:54080). Sep 9 22:10:33.719525 systemd-logind[1563]: Watching system buttons on /dev/input/event2 (Power Button) Sep 9 22:10:33.719548 systemd-logind[1563]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Sep 9 22:10:33.723364 systemd-logind[1563]: New seat seat0. Sep 9 22:10:33.759168 systemd[1]: Started systemd-logind.service - User Login Management. Sep 9 22:10:33.759639 systemd[1]: issuegen.service: Deactivated successfully. Sep 9 22:10:33.759995 systemd[1]: Finished issuegen.service - Generate /run/issue. Sep 9 22:10:33.815494 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Sep 9 22:10:33.856450 kernel: kvm_amd: TSC scaling supported Sep 9 22:10:33.856564 kernel: kvm_amd: Nested Virtualization enabled Sep 9 22:10:33.856581 kernel: kvm_amd: Nested Paging enabled Sep 9 22:10:33.856596 kernel: kvm_amd: LBR virtualization supported Sep 9 22:10:33.856611 kernel: kvm_amd: Virtual VMLOAD VMSAVE supported Sep 9 22:10:33.859329 kernel: kvm_amd: Virtual GIF supported Sep 9 22:10:33.951814 locksmithd[1593]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Sep 9 22:10:33.958446 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Sep 9 22:10:33.962732 systemd[1]: Started getty@tty1.service - Getty on tty1. Sep 9 22:10:33.967089 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Sep 9 22:10:33.981222 systemd[1]: Reached target getty.target - Login Prompts. Sep 9 22:10:34.144853 kernel: EDAC MC: Ver: 3.0.0 Sep 9 22:10:34.355580 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Sep 9 22:10:34.363782 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Sep 9 22:10:34.807097 extend-filesystems[1610]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Sep 9 22:10:34.807097 extend-filesystems[1610]: old_desc_blocks = 1, new_desc_blocks = 1 Sep 9 22:10:34.807097 extend-filesystems[1610]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. 
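The resize2fs output above grows /dev/vda9 from 553472 to 1864699 blocks at a 4k block size. A quick conversion of those block counts (numbers taken from the log lines above, nothing else assumed):

```python
BLOCK_SIZE = 4096  # "(4k) blocks" per the resize2fs lines above

def blocks_to_gib(blocks: int) -> float:
    return blocks * BLOCK_SIZE / 2**30

print(f"before resize: {blocks_to_gib(553_472):.2f} GiB")    # ~2.11 GiB
print(f"after resize:  {blocks_to_gib(1_864_699):.2f} GiB")  # ~7.11 GiB
```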
Sep 9 22:10:34.956417 extend-filesystems[1552]: Resized filesystem in /dev/vda9 Sep 9 22:10:34.813642 systemd[1]: extend-filesystems.service: Deactivated successfully. Sep 9 22:10:34.959485 bash[1617]: Updated "/home/core/.ssh/authorized_keys" Sep 9 22:10:34.814106 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Sep 9 22:10:34.902128 systemd-networkd[1474]: eth0: Gained IPv6LL Sep 9 22:10:34.962100 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Sep 9 22:10:34.970091 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Sep 9 22:10:34.988752 tar[1574]: linux-amd64/LICENSE Sep 9 22:10:34.988752 tar[1574]: linux-amd64/README.md Sep 9 22:10:35.053145 systemd[1]: Reached target network-online.target - Network is Online. Sep 9 22:10:35.265543 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Sep 9 22:10:35.342624 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 9 22:10:35.349788 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Sep 9 22:10:35.351719 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Sep 9 22:10:35.356579 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Sep 9 22:10:35.386326 containerd[1584]: time="2025-09-09T22:10:35Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8 Sep 9 22:10:35.387825 containerd[1584]: time="2025-09-09T22:10:35.387111036Z" level=info msg="starting containerd" revision=fb4c30d4ede3531652d86197bf3fc9515e5276d9 version=v2.0.5 Sep 9 22:10:35.438357 systemd[1]: coreos-metadata.service: Deactivated successfully. Sep 9 22:10:35.438799 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Sep 9 22:10:35.452607 containerd[1584]: time="2025-09-09T22:10:35.450694050Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="19.724µs" Sep 9 22:10:35.452607 containerd[1584]: time="2025-09-09T22:10:35.451968897Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1 Sep 9 22:10:35.453622 containerd[1584]: time="2025-09-09T22:10:35.453347368Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1 Sep 9 22:10:35.454378 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. 
Sep 9 22:10:35.459165 containerd[1584]: time="2025-09-09T22:10:35.455537988Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1 Sep 9 22:10:35.459165 containerd[1584]: time="2025-09-09T22:10:35.455587120Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1 Sep 9 22:10:35.459165 containerd[1584]: time="2025-09-09T22:10:35.455639874Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Sep 9 22:10:35.459165 containerd[1584]: time="2025-09-09T22:10:35.455771531Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Sep 9 22:10:35.459165 containerd[1584]: time="2025-09-09T22:10:35.455789640Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Sep 9 22:10:35.459165 containerd[1584]: time="2025-09-09T22:10:35.456246874Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Sep 9 22:10:35.459165 containerd[1584]: time="2025-09-09T22:10:35.456281965Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Sep 9 22:10:35.459165 containerd[1584]: time="2025-09-09T22:10:35.456302794Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Sep 9 22:10:35.459165 containerd[1584]: time="2025-09-09T22:10:35.456315533Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1 Sep 9 22:10:35.459165 containerd[1584]: time="2025-09-09T22:10:35.456471300Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1 Sep 9 22:10:35.459165 containerd[1584]: time="2025-09-09T22:10:35.456872077Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Sep 9 22:10:35.460697 containerd[1584]: time="2025-09-09T22:10:35.456918695Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Sep 9 22:10:35.460697 containerd[1584]: time="2025-09-09T22:10:35.456932592Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1 Sep 9 22:10:35.460697 containerd[1584]: time="2025-09-09T22:10:35.456994451Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1 Sep 9 22:10:35.463727 containerd[1584]: time="2025-09-09T22:10:35.462239466Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1 Sep 9 22:10:35.463727 containerd[1584]: time="2025-09-09T22:10:35.462426556Z" level=info msg="metadata content store policy set" policy=shared Sep 9 22:10:35.481661 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Sep 9 22:10:35.618257 sshd[1626]: Connection closed by authenticating user core 10.0.0.1 port 54080 [preauth] Sep 9 22:10:35.620971 systemd[1]: sshd@0-10.0.0.117:22-10.0.0.1:54080.service: Deactivated successfully. 
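containerd skips the btrfs, zfs and devmapper snapshotters above and keeps overlayfs because /var/lib/containerd sits on ext4, as the btrfs skip message states. A small sketch of how one could confirm the backing filesystem type from userspace by walking /proc/mounts (this is not containerd code, just a rough longest-prefix lookup):

```python
def fs_type(path: str) -> str:
    """Rough lookup: fstype of the longest mount point that prefixes `path`."""
    best_mountpoint, best_fstype = "", "unknown"
    with open("/proc/mounts") as mounts:
        for line in mounts:
            _dev, mountpoint, fstype, *_ = line.split()
            if path.startswith(mountpoint) and len(mountpoint) > len(best_mountpoint):
                best_mountpoint, best_fstype = mountpoint, fstype
    return best_fstype

print(fs_type("/var/lib/containerd"))  # "ext4" on this host, per the skip message above
```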
Sep 9 22:10:35.655866 containerd[1584]: time="2025-09-09T22:10:35.655714065Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1 Sep 9 22:10:35.656062 containerd[1584]: time="2025-09-09T22:10:35.655889173Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1 Sep 9 22:10:35.656062 containerd[1584]: time="2025-09-09T22:10:35.655913884Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1 Sep 9 22:10:35.656062 containerd[1584]: time="2025-09-09T22:10:35.655929158Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1 Sep 9 22:10:35.656062 containerd[1584]: time="2025-09-09T22:10:35.655953589Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1 Sep 9 22:10:35.656062 containerd[1584]: time="2025-09-09T22:10:35.655967455Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1 Sep 9 22:10:35.656062 containerd[1584]: time="2025-09-09T22:10:35.655989962Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1 Sep 9 22:10:35.656062 containerd[1584]: time="2025-09-09T22:10:35.656006012Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1 Sep 9 22:10:35.656062 containerd[1584]: time="2025-09-09T22:10:35.656026605Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1 Sep 9 22:10:35.656062 containerd[1584]: time="2025-09-09T22:10:35.656039084Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1 Sep 9 22:10:35.656062 containerd[1584]: time="2025-09-09T22:10:35.656050032Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1 Sep 9 22:10:35.656323 containerd[1584]: time="2025-09-09T22:10:35.656069848Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2 Sep 9 22:10:35.656384 containerd[1584]: time="2025-09-09T22:10:35.656359228Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1 Sep 9 22:10:35.656418 containerd[1584]: time="2025-09-09T22:10:35.656404842Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1 Sep 9 22:10:35.656458 containerd[1584]: time="2025-09-09T22:10:35.656426139Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1 Sep 9 22:10:35.656458 containerd[1584]: time="2025-09-09T22:10:35.656442064Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1 Sep 9 22:10:35.656514 containerd[1584]: time="2025-09-09T22:10:35.656460017Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 Sep 9 22:10:35.656514 containerd[1584]: time="2025-09-09T22:10:35.656476068Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1 Sep 9 22:10:35.656514 containerd[1584]: time="2025-09-09T22:10:35.656504835Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1 Sep 9 22:10:35.656683 containerd[1584]: time="2025-09-09T22:10:35.656523244Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1 Sep 9 22:10:35.656683 
containerd[1584]: time="2025-09-09T22:10:35.656539656Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 Sep 9 22:10:35.656683 containerd[1584]: time="2025-09-09T22:10:35.656579309Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 Sep 9 22:10:35.656683 containerd[1584]: time="2025-09-09T22:10:35.656615941Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 Sep 9 22:10:35.656884 containerd[1584]: time="2025-09-09T22:10:35.656834221Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" Sep 9 22:10:35.656884 containerd[1584]: time="2025-09-09T22:10:35.656861404Z" level=info msg="Start snapshots syncer" Sep 9 22:10:35.656955 containerd[1584]: time="2025-09-09T22:10:35.656899412Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 Sep 9 22:10:35.657686 containerd[1584]: time="2025-09-09T22:10:35.657437796Z" level=info msg="starting cri plugin" config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" Sep 9 22:10:35.657686 containerd[1584]: time="2025-09-09T22:10:35.657585265Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 Sep 9 22:10:35.658978 containerd[1584]: time="2025-09-09T22:10:35.657705073Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 Sep 9 22:10:35.658978 containerd[1584]: time="2025-09-09T22:10:35.657929106Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 Sep 9 22:10:35.658978 containerd[1584]: time="2025-09-09T22:10:35.657964506Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes 
type=io.containerd.grpc.v1 Sep 9 22:10:35.658978 containerd[1584]: time="2025-09-09T22:10:35.657982108Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 Sep 9 22:10:35.658978 containerd[1584]: time="2025-09-09T22:10:35.657997009Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 Sep 9 22:10:35.658978 containerd[1584]: time="2025-09-09T22:10:35.658019526Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 Sep 9 22:10:35.658978 containerd[1584]: time="2025-09-09T22:10:35.658037946Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 Sep 9 22:10:35.658978 containerd[1584]: time="2025-09-09T22:10:35.658052816Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 Sep 9 22:10:35.658978 containerd[1584]: time="2025-09-09T22:10:35.658106221Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 Sep 9 22:10:35.658978 containerd[1584]: time="2025-09-09T22:10:35.658172676Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 Sep 9 22:10:35.658978 containerd[1584]: time="2025-09-09T22:10:35.658194003Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 Sep 9 22:10:35.658978 containerd[1584]: time="2025-09-09T22:10:35.658235126Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Sep 9 22:10:35.658978 containerd[1584]: time="2025-09-09T22:10:35.658259061Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Sep 9 22:10:35.658978 containerd[1584]: time="2025-09-09T22:10:35.658274717Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Sep 9 22:10:35.659452 containerd[1584]: time="2025-09-09T22:10:35.658290747Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Sep 9 22:10:35.659452 containerd[1584]: time="2025-09-09T22:10:35.658303609Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 Sep 9 22:10:35.659452 containerd[1584]: time="2025-09-09T22:10:35.658319265Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 Sep 9 22:10:35.659452 containerd[1584]: time="2025-09-09T22:10:35.658340893Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 Sep 9 22:10:35.659452 containerd[1584]: time="2025-09-09T22:10:35.658369411Z" level=info msg="runtime interface created" Sep 9 22:10:35.659452 containerd[1584]: time="2025-09-09T22:10:35.658389807Z" level=info msg="created NRI interface" Sep 9 22:10:35.659452 containerd[1584]: time="2025-09-09T22:10:35.658410710Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 Sep 9 22:10:35.659452 containerd[1584]: time="2025-09-09T22:10:35.658431189Z" level=info msg="Connect containerd service" Sep 9 22:10:35.659452 containerd[1584]: time="2025-09-09T22:10:35.658465802Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Sep 9 22:10:35.668489 containerd[1584]: 
time="2025-09-09T22:10:35.668416858Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Sep 9 22:10:36.570811 containerd[1584]: time="2025-09-09T22:10:36.570697258Z" level=info msg="Start subscribing containerd event" Sep 9 22:10:36.570811 containerd[1584]: time="2025-09-09T22:10:36.570796873Z" level=info msg="Start recovering state" Sep 9 22:10:36.571466 containerd[1584]: time="2025-09-09T22:10:36.570981901Z" level=info msg="Start event monitor" Sep 9 22:10:36.571466 containerd[1584]: time="2025-09-09T22:10:36.571007060Z" level=info msg="Start cni network conf syncer for default" Sep 9 22:10:36.571466 containerd[1584]: time="2025-09-09T22:10:36.571020252Z" level=info msg="Start streaming server" Sep 9 22:10:36.571466 containerd[1584]: time="2025-09-09T22:10:36.571032446Z" level=info msg="Registered namespace \"k8s.io\" with NRI" Sep 9 22:10:36.571466 containerd[1584]: time="2025-09-09T22:10:36.571042474Z" level=info msg="runtime interface starting up..." Sep 9 22:10:36.571466 containerd[1584]: time="2025-09-09T22:10:36.571050792Z" level=info msg="starting plugins..." Sep 9 22:10:36.571466 containerd[1584]: time="2025-09-09T22:10:36.571074199Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Sep 9 22:10:36.575204 containerd[1584]: time="2025-09-09T22:10:36.573996033Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Sep 9 22:10:36.575204 containerd[1584]: time="2025-09-09T22:10:36.574110181Z" level=info msg=serving... address=/run/containerd/containerd.sock Sep 9 22:10:36.575204 containerd[1584]: time="2025-09-09T22:10:36.574207859Z" level=info msg="containerd successfully booted in 1.189016s" Sep 9 22:10:36.575007 systemd[1]: Started containerd.service - containerd container runtime. Sep 9 22:10:39.834778 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 9 22:10:39.835616 systemd[1]: Reached target multi-user.target - Multi-User System. Sep 9 22:10:39.837019 systemd[1]: Startup finished in 4.667s (kernel) + 12.656s (initrd) + 12.513s (userspace) = 29.838s. Sep 9 22:10:39.860360 (kubelet)[1702]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 9 22:10:41.883322 kubelet[1702]: E0909 22:10:41.883240 1702 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 9 22:10:41.888006 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 9 22:10:41.888221 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 9 22:10:41.888749 systemd[1]: kubelet.service: Consumed 3.763s CPU time, 265.3M memory peak. Sep 9 22:10:45.812733 systemd[1]: Started sshd@1-10.0.0.117:22-10.0.0.1:55030.service - OpenSSH per-connection server daemon (10.0.0.1:55030). 
Sep 9 22:10:46.001698 sshd[1711]: Accepted publickey for core from 10.0.0.1 port 55030 ssh2: RSA SHA256:A2CJI2QL6ueQzwzJUDumHRmawTN/BqpJNEZzUqxCWKo Sep 9 22:10:46.010307 sshd-session[1711]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 22:10:46.035067 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Sep 9 22:10:46.046890 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Sep 9 22:10:46.079078 systemd-logind[1563]: New session 1 of user core. Sep 9 22:10:46.101416 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Sep 9 22:10:46.106486 systemd[1]: Starting user@500.service - User Manager for UID 500... Sep 9 22:10:46.134009 (systemd)[1716]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Sep 9 22:10:46.141937 systemd-logind[1563]: New session c1 of user core. Sep 9 22:10:46.449924 systemd[1716]: Queued start job for default target default.target. Sep 9 22:10:46.474116 systemd[1716]: Created slice app.slice - User Application Slice. Sep 9 22:10:46.474587 systemd[1716]: Reached target paths.target - Paths. Sep 9 22:10:46.476020 systemd[1716]: Reached target timers.target - Timers. Sep 9 22:10:46.486398 systemd[1716]: Starting dbus.socket - D-Bus User Message Bus Socket... Sep 9 22:10:46.577032 systemd[1716]: Listening on dbus.socket - D-Bus User Message Bus Socket. Sep 9 22:10:46.577253 systemd[1716]: Reached target sockets.target - Sockets. Sep 9 22:10:46.577341 systemd[1716]: Reached target basic.target - Basic System. Sep 9 22:10:46.577396 systemd[1716]: Reached target default.target - Main User Target. Sep 9 22:10:46.577447 systemd[1716]: Startup finished in 412ms. Sep 9 22:10:46.577740 systemd[1]: Started user@500.service - User Manager for UID 500. Sep 9 22:10:46.600143 systemd[1]: Started session-1.scope - Session 1 of User core. Sep 9 22:10:46.675404 systemd[1]: Started sshd@2-10.0.0.117:22-10.0.0.1:55044.service - OpenSSH per-connection server daemon (10.0.0.1:55044). Sep 9 22:10:46.793200 sshd[1727]: Accepted publickey for core from 10.0.0.1 port 55044 ssh2: RSA SHA256:A2CJI2QL6ueQzwzJUDumHRmawTN/BqpJNEZzUqxCWKo Sep 9 22:10:46.796420 sshd-session[1727]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 22:10:46.809005 systemd-logind[1563]: New session 2 of user core. Sep 9 22:10:46.828289 systemd[1]: Started session-2.scope - Session 2 of User core. Sep 9 22:10:46.928223 sshd[1730]: Connection closed by 10.0.0.1 port 55044 Sep 9 22:10:46.925686 sshd-session[1727]: pam_unix(sshd:session): session closed for user core Sep 9 22:10:46.953842 systemd[1]: sshd@2-10.0.0.117:22-10.0.0.1:55044.service: Deactivated successfully. Sep 9 22:10:46.957255 systemd[1]: session-2.scope: Deactivated successfully. Sep 9 22:10:46.959078 systemd-logind[1563]: Session 2 logged out. Waiting for processes to exit. Sep 9 22:10:46.964793 systemd[1]: Started sshd@3-10.0.0.117:22-10.0.0.1:55058.service - OpenSSH per-connection server daemon (10.0.0.1:55058). Sep 9 22:10:46.967327 systemd-logind[1563]: Removed session 2. Sep 9 22:10:47.063855 sshd[1736]: Accepted publickey for core from 10.0.0.1 port 55058 ssh2: RSA SHA256:A2CJI2QL6ueQzwzJUDumHRmawTN/BqpJNEZzUqxCWKo Sep 9 22:10:47.066405 sshd-session[1736]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 22:10:47.079644 systemd-logind[1563]: New session 3 of user core. 
Sep 9 22:10:47.097248 systemd[1]: Started session-3.scope - Session 3 of User core. Sep 9 22:10:47.158789 sshd[1739]: Connection closed by 10.0.0.1 port 55058 Sep 9 22:10:47.162547 sshd-session[1736]: pam_unix(sshd:session): session closed for user core Sep 9 22:10:47.175677 systemd[1]: sshd@3-10.0.0.117:22-10.0.0.1:55058.service: Deactivated successfully. Sep 9 22:10:47.178835 systemd[1]: session-3.scope: Deactivated successfully. Sep 9 22:10:47.213023 systemd-logind[1563]: Session 3 logged out. Waiting for processes to exit. Sep 9 22:10:47.225395 systemd[1]: Started sshd@4-10.0.0.117:22-10.0.0.1:55070.service - OpenSSH per-connection server daemon (10.0.0.1:55070). Sep 9 22:10:47.227862 systemd-logind[1563]: Removed session 3. Sep 9 22:10:47.382815 sshd[1745]: Accepted publickey for core from 10.0.0.1 port 55070 ssh2: RSA SHA256:A2CJI2QL6ueQzwzJUDumHRmawTN/BqpJNEZzUqxCWKo Sep 9 22:10:47.387903 sshd-session[1745]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 22:10:47.426469 systemd-logind[1563]: New session 4 of user core. Sep 9 22:10:47.441368 systemd[1]: Started session-4.scope - Session 4 of User core. Sep 9 22:10:47.538277 sshd[1748]: Connection closed by 10.0.0.1 port 55070 Sep 9 22:10:47.541248 sshd-session[1745]: pam_unix(sshd:session): session closed for user core Sep 9 22:10:47.572028 systemd[1]: sshd@4-10.0.0.117:22-10.0.0.1:55070.service: Deactivated successfully. Sep 9 22:10:47.582794 systemd[1]: session-4.scope: Deactivated successfully. Sep 9 22:10:47.596067 systemd-logind[1563]: Session 4 logged out. Waiting for processes to exit. Sep 9 22:10:47.617096 systemd[1]: Started sshd@5-10.0.0.117:22-10.0.0.1:55074.service - OpenSSH per-connection server daemon (10.0.0.1:55074). Sep 9 22:10:47.618112 systemd-logind[1563]: Removed session 4. Sep 9 22:10:47.812293 sshd[1754]: Accepted publickey for core from 10.0.0.1 port 55074 ssh2: RSA SHA256:A2CJI2QL6ueQzwzJUDumHRmawTN/BqpJNEZzUqxCWKo Sep 9 22:10:47.817908 sshd-session[1754]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 22:10:47.852160 systemd-logind[1563]: New session 5 of user core. Sep 9 22:10:47.875225 systemd[1]: Started session-5.scope - Session 5 of User core. Sep 9 22:10:48.035462 sudo[1758]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Sep 9 22:10:48.035883 sudo[1758]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 9 22:10:48.105417 sudo[1758]: pam_unix(sudo:session): session closed for user root Sep 9 22:10:48.117764 sshd[1757]: Connection closed by 10.0.0.1 port 55074 Sep 9 22:10:48.122310 sshd-session[1754]: pam_unix(sshd:session): session closed for user core Sep 9 22:10:48.149618 systemd[1]: sshd@5-10.0.0.117:22-10.0.0.1:55074.service: Deactivated successfully. Sep 9 22:10:48.159790 systemd[1]: session-5.scope: Deactivated successfully. Sep 9 22:10:48.163922 systemd-logind[1563]: Session 5 logged out. Waiting for processes to exit. Sep 9 22:10:48.194886 systemd[1]: Started sshd@6-10.0.0.117:22-10.0.0.1:55080.service - OpenSSH per-connection server daemon (10.0.0.1:55080). Sep 9 22:10:48.203325 systemd-logind[1563]: Removed session 5. Sep 9 22:10:48.306864 sshd[1764]: Accepted publickey for core from 10.0.0.1 port 55080 ssh2: RSA SHA256:A2CJI2QL6ueQzwzJUDumHRmawTN/BqpJNEZzUqxCWKo Sep 9 22:10:48.307841 sshd-session[1764]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 22:10:48.337029 systemd-logind[1563]: New session 6 of user core. 
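Each SSH connection in this stretch gets its own per-connection sshd@... unit plus a numbered logind session that is opened and torn down within a second or two. A throwaway sketch for tallying those connections from journal text shaped like the lines above (the sample strings are illustrative, not additional log data):

```python
import re

ACCEPT_RE = re.compile(r"Accepted publickey for (\S+) from (\S+) port (\d+)")

def accepted_sessions(journal_text: str) -> list[tuple[str, str, int]]:
    return [(user, host, int(port)) for user, host, port in ACCEPT_RE.findall(journal_text)]

sample = (
    "sshd[1711]: Accepted publickey for core from 10.0.0.1 port 55030 ssh2: RSA SHA256:...\n"
    "sshd[1727]: Accepted publickey for core from 10.0.0.1 port 55044 ssh2: RSA SHA256:...\n"
)
print(accepted_sessions(sample))  # [('core', '10.0.0.1', 55030), ('core', '10.0.0.1', 55044)]
```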
Sep 9 22:10:48.366951 systemd[1]: Started session-6.scope - Session 6 of User core. Sep 9 22:10:48.465541 sudo[1769]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Sep 9 22:10:48.466426 sudo[1769]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 9 22:10:49.215806 sudo[1769]: pam_unix(sudo:session): session closed for user root Sep 9 22:10:49.234184 sudo[1768]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Sep 9 22:10:49.234641 sudo[1768]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 9 22:10:49.274496 systemd[1]: Starting audit-rules.service - Load Audit Rules... Sep 9 22:10:49.401276 augenrules[1791]: No rules Sep 9 22:10:49.408178 systemd[1]: audit-rules.service: Deactivated successfully. Sep 9 22:10:49.408653 systemd[1]: Finished audit-rules.service - Load Audit Rules. Sep 9 22:10:49.415587 sudo[1768]: pam_unix(sudo:session): session closed for user root Sep 9 22:10:49.423277 sshd[1767]: Connection closed by 10.0.0.1 port 55080 Sep 9 22:10:49.427566 sshd-session[1764]: pam_unix(sshd:session): session closed for user core Sep 9 22:10:49.451641 systemd[1]: sshd@6-10.0.0.117:22-10.0.0.1:55080.service: Deactivated successfully. Sep 9 22:10:49.458948 systemd[1]: session-6.scope: Deactivated successfully. Sep 9 22:10:49.467502 systemd-logind[1563]: Session 6 logged out. Waiting for processes to exit. Sep 9 22:10:49.485038 systemd[1]: Started sshd@7-10.0.0.117:22-10.0.0.1:55090.service - OpenSSH per-connection server daemon (10.0.0.1:55090). Sep 9 22:10:49.489396 systemd-logind[1563]: Removed session 6. Sep 9 22:10:49.611212 sshd[1800]: Accepted publickey for core from 10.0.0.1 port 55090 ssh2: RSA SHA256:A2CJI2QL6ueQzwzJUDumHRmawTN/BqpJNEZzUqxCWKo Sep 9 22:10:49.614805 sshd-session[1800]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 22:10:49.643096 systemd-logind[1563]: New session 7 of user core. Sep 9 22:10:49.666570 systemd[1]: Started session-7.scope - Session 7 of User core. Sep 9 22:10:49.748804 sudo[1804]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Sep 9 22:10:49.749392 sudo[1804]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 9 22:10:50.573797 kernel: hrtimer: interrupt took 8015715 ns Sep 9 22:10:52.053764 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Sep 9 22:10:52.076731 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 9 22:10:53.266237 systemd[1]: Starting docker.service - Docker Application Container Engine... Sep 9 22:10:53.304281 (dockerd)[1827]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Sep 9 22:10:53.412370 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
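kubelet.service is entering a restart loop here: the earlier attempt exited with status 1 because /var/lib/kubelet/config.yaml does not exist yet (that file normally appears once the node is initialized, e.g. by kubeadm), and systemd schedules the next try with an increasing restart counter. A small sketch that pulls those counters out of journal text like the lines in this log (sample strings are illustrative):

```python
import re

COUNTER_RE = re.compile(r"kubelet\.service: Scheduled restart job, restart counter is at (\d+)")

def restart_counters(journal_text: str) -> list[int]:
    return [int(m.group(1)) for m in COUNTER_RE.finditer(journal_text)]

sample = (
    "systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.\n"
    "systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.\n"
)
print(restart_counters(sample))  # [1, 2]
```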
Sep 9 22:10:53.424920 (kubelet)[1832]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 9 22:10:54.517689 kubelet[1832]: E0909 22:10:54.517601 1832 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 9 22:10:54.534302 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 9 22:10:54.536491 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 9 22:10:54.537974 systemd[1]: kubelet.service: Consumed 1.627s CPU time, 112.2M memory peak. Sep 9 22:10:56.846403 dockerd[1827]: time="2025-09-09T22:10:56.845189635Z" level=info msg="Starting up" Sep 9 22:10:56.854663 dockerd[1827]: time="2025-09-09T22:10:56.848963924Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" Sep 9 22:10:57.017962 dockerd[1827]: time="2025-09-09T22:10:57.017550685Z" level=info msg="Creating a containerd client" address=/var/run/docker/libcontainerd/docker-containerd.sock timeout=1m0s Sep 9 22:10:57.447888 dockerd[1827]: time="2025-09-09T22:10:57.445989072Z" level=info msg="Loading containers: start." Sep 9 22:10:57.523148 kernel: Initializing XFRM netlink socket Sep 9 22:10:58.938588 systemd-networkd[1474]: docker0: Link UP Sep 9 22:10:58.956797 dockerd[1827]: time="2025-09-09T22:10:58.955756992Z" level=info msg="Loading containers: done." Sep 9 22:10:58.997438 dockerd[1827]: time="2025-09-09T22:10:58.997336661Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Sep 9 22:10:58.997679 dockerd[1827]: time="2025-09-09T22:10:58.997475352Z" level=info msg="Docker daemon" commit=6430e49a55babd9b8f4d08e70ecb2b68900770fe containerd-snapshotter=false storage-driver=overlay2 version=28.0.4 Sep 9 22:10:58.997679 dockerd[1827]: time="2025-09-09T22:10:58.997623465Z" level=info msg="Initializing buildkit" Sep 9 22:10:59.285029 dockerd[1827]: time="2025-09-09T22:10:59.284416223Z" level=info msg="Completed buildkit initialization" Sep 9 22:10:59.297723 dockerd[1827]: time="2025-09-09T22:10:59.297605672Z" level=info msg="Daemon has completed initialization" Sep 9 22:10:59.300411 dockerd[1827]: time="2025-09-09T22:10:59.298025427Z" level=info msg="API listen on /run/docker.sock" Sep 9 22:10:59.298418 systemd[1]: Started docker.service - Docker Application Container Engine. Sep 9 22:11:02.207769 containerd[1584]: time="2025-09-09T22:11:02.206916166Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.12\"" Sep 9 22:11:04.425025 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1083240152.mount: Deactivated successfully. Sep 9 22:11:04.551450 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Sep 9 22:11:04.558008 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 9 22:11:05.329570 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
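The dockerd warning above about CONFIG_OVERLAY_FS_REDIRECT_DIR means the overlay2 storage driver is not using its native diff path, which the daemon itself notes may degrade image-build performance. A hedged sketch for checking the related overlay module parameter from userspace (the sysfs path assumes a stock Linux overlay module and may not be exposed on every kernel):

```python
from pathlib import Path

param = Path("/sys/module/overlay/parameters/redirect_dir")  # present when overlay is loaded
if param.exists():
    print("overlay redirect_dir:", param.read_text().strip())  # typically "Y" or "N"
else:
    print("overlay module parameter not exposed on this kernel")
```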
Sep 9 22:11:05.359343 (kubelet)[2084]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 9 22:11:05.614094 kubelet[2084]: E0909 22:11:05.612774 2084 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 9 22:11:05.623338 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 9 22:11:05.623610 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 9 22:11:05.624432 systemd[1]: kubelet.service: Consumed 655ms CPU time, 111.1M memory peak. Sep 9 22:11:11.155108 containerd[1584]: time="2025-09-09T22:11:11.154897136Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.31.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 22:11:11.161017 containerd[1584]: time="2025-09-09T22:11:11.160748632Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.31.12: active requests=0, bytes read=28079631" Sep 9 22:11:11.168632 containerd[1584]: time="2025-09-09T22:11:11.168506072Z" level=info msg="ImageCreate event name:\"sha256:b1963c5b49c1722b8f408deaf83aafca7f48f47fed0ed14e5c10e93cc55974a7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 22:11:11.179576 containerd[1584]: time="2025-09-09T22:11:11.179172222Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:e9011c3bee8c06ecabd7816e119dca4e448c92f7a78acd891de3d2db1dc6c234\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 22:11:11.185252 containerd[1584]: time="2025-09-09T22:11:11.182762805Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.31.12\" with image id \"sha256:b1963c5b49c1722b8f408deaf83aafca7f48f47fed0ed14e5c10e93cc55974a7\", repo tag \"registry.k8s.io/kube-apiserver:v1.31.12\", repo digest \"registry.k8s.io/kube-apiserver@sha256:e9011c3bee8c06ecabd7816e119dca4e448c92f7a78acd891de3d2db1dc6c234\", size \"28076431\" in 8.975711206s" Sep 9 22:11:11.185252 containerd[1584]: time="2025-09-09T22:11:11.184249516Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.12\" returns image reference \"sha256:b1963c5b49c1722b8f408deaf83aafca7f48f47fed0ed14e5c10e93cc55974a7\"" Sep 9 22:11:11.189137 containerd[1584]: time="2025-09-09T22:11:11.189072622Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.12\"" Sep 9 22:11:15.790580 containerd[1584]: time="2025-09-09T22:11:15.790449500Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.31.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 22:11:15.792839 containerd[1584]: time="2025-09-09T22:11:15.792194707Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.31.12: active requests=0, bytes read=24714681" Sep 9 22:11:15.799599 containerd[1584]: time="2025-09-09T22:11:15.796045460Z" level=info msg="ImageCreate event name:\"sha256:200c1a99a6f2b9d3b0a6e9b7362663513589341e0e58bc3b953a373efa735dfd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 22:11:15.813044 containerd[1584]: time="2025-09-09T22:11:15.812872052Z" level=info msg="ImageCreate event 
name:\"registry.k8s.io/kube-controller-manager@sha256:d2862f94d87320267fddbd55db26556a267aa802e51d6b60f25786b4c428afc8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 22:11:15.816507 containerd[1584]: time="2025-09-09T22:11:15.814296485Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.31.12\" with image id \"sha256:200c1a99a6f2b9d3b0a6e9b7362663513589341e0e58bc3b953a373efa735dfd\", repo tag \"registry.k8s.io/kube-controller-manager:v1.31.12\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:d2862f94d87320267fddbd55db26556a267aa802e51d6b60f25786b4c428afc8\", size \"26317875\" in 4.623363492s" Sep 9 22:11:15.816507 containerd[1584]: time="2025-09-09T22:11:15.814371211Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.12\" returns image reference \"sha256:200c1a99a6f2b9d3b0a6e9b7362663513589341e0e58bc3b953a373efa735dfd\"" Sep 9 22:11:15.819719 containerd[1584]: time="2025-09-09T22:11:15.819096602Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.12\"" Sep 9 22:11:15.820411 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Sep 9 22:11:15.837028 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 9 22:11:16.953995 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 9 22:11:16.994505 (kubelet)[2144]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 9 22:11:17.363839 kubelet[2144]: E0909 22:11:17.362831 2144 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 9 22:11:17.374080 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 9 22:11:17.374401 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 9 22:11:17.378762 systemd[1]: kubelet.service: Consumed 1.033s CPU time, 110.1M memory peak. Sep 9 22:11:19.017535 update_engine[1565]: I20250909 22:11:19.017405 1565 update_attempter.cc:509] Updating boot flags... 
Sep 9 22:11:20.236067 containerd[1584]: time="2025-09-09T22:11:20.235666889Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.31.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 22:11:20.271354 containerd[1584]: time="2025-09-09T22:11:20.271222488Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.31.12: active requests=0, bytes read=18782427" Sep 9 22:11:20.305501 containerd[1584]: time="2025-09-09T22:11:20.305205437Z" level=info msg="ImageCreate event name:\"sha256:bcdd9599681a9460a5539177a986dbdaf880ac56eeb117ab94adb8f37889efba\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 22:11:20.377899 containerd[1584]: time="2025-09-09T22:11:20.377781245Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:152943b7e30244f4415fd0a5860a2dccd91660fe983d30a28a10edb0cc8f6756\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 22:11:20.381992 containerd[1584]: time="2025-09-09T22:11:20.381910429Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.31.12\" with image id \"sha256:bcdd9599681a9460a5539177a986dbdaf880ac56eeb117ab94adb8f37889efba\", repo tag \"registry.k8s.io/kube-scheduler:v1.31.12\", repo digest \"registry.k8s.io/kube-scheduler@sha256:152943b7e30244f4415fd0a5860a2dccd91660fe983d30a28a10edb0cc8f6756\", size \"20385639\" in 4.560621919s" Sep 9 22:11:20.383006 containerd[1584]: time="2025-09-09T22:11:20.382258570Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.12\" returns image reference \"sha256:bcdd9599681a9460a5539177a986dbdaf880ac56eeb117ab94adb8f37889efba\"" Sep 9 22:11:20.384865 containerd[1584]: time="2025-09-09T22:11:20.384554588Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.12\"" Sep 9 22:11:23.901218 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2880383915.mount: Deactivated successfully. Sep 9 22:11:27.552145 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4. Sep 9 22:11:27.698778 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 9 22:11:28.254692 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 9 22:11:28.285400 (kubelet)[2187]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 9 22:11:28.410419 kubelet[2187]: E0909 22:11:28.410241 2187 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 9 22:11:28.415090 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 9 22:11:28.415296 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 9 22:11:28.415771 systemd[1]: kubelet.service: Consumed 370ms CPU time, 110.7M memory peak. 
Sep 9 22:11:28.680944 containerd[1584]: time="2025-09-09T22:11:28.680824624Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.31.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 22:11:28.681734 containerd[1584]: time="2025-09-09T22:11:28.681666217Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.31.12: active requests=0, bytes read=30384255" Sep 9 22:11:28.683735 containerd[1584]: time="2025-09-09T22:11:28.683622788Z" level=info msg="ImageCreate event name:\"sha256:507cc52f5f78c0cff25e904c76c18e6bfc90982e9cc2aa4dcb19033f21c3f679\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 22:11:28.686256 containerd[1584]: time="2025-09-09T22:11:28.686099759Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:90aa6b5f4065937521ff8438bc705317485d0be3f8b00a07145e697d92cc2cc6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 22:11:28.686881 containerd[1584]: time="2025-09-09T22:11:28.686748792Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.31.12\" with image id \"sha256:507cc52f5f78c0cff25e904c76c18e6bfc90982e9cc2aa4dcb19033f21c3f679\", repo tag \"registry.k8s.io/kube-proxy:v1.31.12\", repo digest \"registry.k8s.io/kube-proxy@sha256:90aa6b5f4065937521ff8438bc705317485d0be3f8b00a07145e697d92cc2cc6\", size \"30383274\" in 8.302117844s" Sep 9 22:11:28.686881 containerd[1584]: time="2025-09-09T22:11:28.686818771Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.12\" returns image reference \"sha256:507cc52f5f78c0cff25e904c76c18e6bfc90982e9cc2aa4dcb19033f21c3f679\"" Sep 9 22:11:28.687751 containerd[1584]: time="2025-09-09T22:11:28.687445557Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" Sep 9 22:11:31.883404 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount162736828.mount: Deactivated successfully. 
Sep 9 22:11:36.899972 containerd[1584]: time="2025-09-09T22:11:36.898502970Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 22:11:36.904563 containerd[1584]: time="2025-09-09T22:11:36.903072289Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=18565241" Sep 9 22:11:36.910506 containerd[1584]: time="2025-09-09T22:11:36.910379268Z" level=info msg="ImageCreate event name:\"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 22:11:36.919873 containerd[1584]: time="2025-09-09T22:11:36.917819059Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 22:11:36.919873 containerd[1584]: time="2025-09-09T22:11:36.919285305Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"18562039\" in 8.231785784s" Sep 9 22:11:36.919873 containerd[1584]: time="2025-09-09T22:11:36.919330478Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\"" Sep 9 22:11:36.921316 containerd[1584]: time="2025-09-09T22:11:36.921030146Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Sep 9 22:11:37.687831 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3618400246.mount: Deactivated successfully. 
Sep 9 22:11:37.706253 containerd[1584]: time="2025-09-09T22:11:37.706158689Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 9 22:11:37.708771 containerd[1584]: time="2025-09-09T22:11:37.708327682Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138" Sep 9 22:11:37.711247 containerd[1584]: time="2025-09-09T22:11:37.710829639Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 9 22:11:37.714749 containerd[1584]: time="2025-09-09T22:11:37.714514048Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 9 22:11:37.716188 containerd[1584]: time="2025-09-09T22:11:37.715245202Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 794.164562ms" Sep 9 22:11:37.716188 containerd[1584]: time="2025-09-09T22:11:37.715309936Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Sep 9 22:11:37.716188 containerd[1584]: time="2025-09-09T22:11:37.715956436Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\"" Sep 9 22:11:38.445964 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 5. Sep 9 22:11:38.452417 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 9 22:11:38.468518 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1218859278.mount: Deactivated successfully. Sep 9 22:11:38.859837 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 9 22:11:38.881435 (kubelet)[2263]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 9 22:11:39.094795 kubelet[2263]: E0909 22:11:39.094168 2263 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 9 22:11:39.112998 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 9 22:11:39.113268 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 9 22:11:39.113964 systemd[1]: kubelet.service: Consumed 372ms CPU time, 110.2M memory peak. 
Sep 9 22:11:45.349992 containerd[1584]: time="2025-09-09T22:11:45.349740946Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.15-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 22:11:45.353061 containerd[1584]: time="2025-09-09T22:11:45.352936749Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.15-0: active requests=0, bytes read=56910709" Sep 9 22:11:45.356477 containerd[1584]: time="2025-09-09T22:11:45.356385233Z" level=info msg="ImageCreate event name:\"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 22:11:45.364145 containerd[1584]: time="2025-09-09T22:11:45.364040058Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 22:11:45.365735 containerd[1584]: time="2025-09-09T22:11:45.365628556Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.15-0\" with image id \"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\", repo tag \"registry.k8s.io/etcd:3.5.15-0\", repo digest \"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\", size \"56909194\" in 7.649636137s" Sep 9 22:11:45.365735 containerd[1584]: time="2025-09-09T22:11:45.365690541Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\" returns image reference \"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\"" Sep 9 22:11:48.555436 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Sep 9 22:11:48.555686 systemd[1]: kubelet.service: Consumed 372ms CPU time, 110.2M memory peak. Sep 9 22:11:48.558523 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 9 22:11:48.588589 systemd[1]: Reload requested from client PID 2352 ('systemctl') (unit session-7.scope)... Sep 9 22:11:48.588616 systemd[1]: Reloading... Sep 9 22:11:48.711739 zram_generator::config[2395]: No configuration found. Sep 9 22:11:49.415434 systemd[1]: Reloading finished in 826 ms. Sep 9 22:11:49.484548 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Sep 9 22:11:49.484651 systemd[1]: kubelet.service: Failed with result 'signal'. Sep 9 22:11:49.485103 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Sep 9 22:11:49.485157 systemd[1]: kubelet.service: Consumed 183ms CPU time, 98.2M memory peak. Sep 9 22:11:49.487196 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 9 22:11:49.801687 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 9 22:11:49.816288 (kubelet)[2443]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Sep 9 22:11:49.936633 kubelet[2443]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 9 22:11:49.936633 kubelet[2443]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Sep 9 22:11:49.936633 kubelet[2443]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. 
See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 9 22:11:49.937157 kubelet[2443]: I0909 22:11:49.936744 2443 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Sep 9 22:11:50.333496 kubelet[2443]: I0909 22:11:50.333393 2443 server.go:491] "Kubelet version" kubeletVersion="v1.31.8" Sep 9 22:11:50.333496 kubelet[2443]: I0909 22:11:50.333451 2443 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Sep 9 22:11:50.333837 kubelet[2443]: I0909 22:11:50.333808 2443 server.go:934] "Client rotation is on, will bootstrap in background" Sep 9 22:11:51.595980 kubelet[2443]: E0909 22:11:51.593831 2443 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.117:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.117:6443: connect: connection refused" logger="UnhandledError" Sep 9 22:11:51.603883 kubelet[2443]: I0909 22:11:51.603070 2443 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Sep 9 22:11:51.665090 kubelet[2443]: I0909 22:11:51.664832 2443 server.go:1431] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Sep 9 22:11:51.769471 kubelet[2443]: I0909 22:11:51.769376 2443 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Sep 9 22:11:51.769678 kubelet[2443]: I0909 22:11:51.769594 2443 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Sep 9 22:11:51.770835 kubelet[2443]: I0909 22:11:51.769825 2443 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Sep 9 22:11:51.770835 kubelet[2443]: I0909 22:11:51.769993 2443 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" 
nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Sep 9 22:11:51.770835 kubelet[2443]: I0909 22:11:51.770351 2443 topology_manager.go:138] "Creating topology manager with none policy" Sep 9 22:11:51.770835 kubelet[2443]: I0909 22:11:51.770368 2443 container_manager_linux.go:300] "Creating device plugin manager" Sep 9 22:11:51.771466 kubelet[2443]: I0909 22:11:51.770821 2443 state_mem.go:36] "Initialized new in-memory state store" Sep 9 22:11:51.793981 kubelet[2443]: I0909 22:11:51.792935 2443 kubelet.go:408] "Attempting to sync node with API server" Sep 9 22:11:51.793981 kubelet[2443]: I0909 22:11:51.792999 2443 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Sep 9 22:11:51.793981 kubelet[2443]: I0909 22:11:51.793075 2443 kubelet.go:314] "Adding apiserver pod source" Sep 9 22:11:51.793981 kubelet[2443]: I0909 22:11:51.793114 2443 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Sep 9 22:11:51.816373 kubelet[2443]: I0909 22:11:51.814725 2443 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v2.0.5" apiVersion="v1" Sep 9 22:11:51.816373 kubelet[2443]: I0909 22:11:51.815320 2443 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Sep 9 22:11:51.816373 kubelet[2443]: W0909 22:11:51.815421 2443 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
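The NodeConfig dump above expresses the kubelet's hard-eviction thresholds as a mix of an absolute quantity (memory.available < 100Mi) and percentages of filesystem capacity (nodefs.available < 10%, nodefs.inodesFree < 5%, imagefs.available < 15%, imagefs.inodesFree < 5%). The Go sketch below only makes that arithmetic concrete; the 20 GiB root filesystem and the assumption that imagefs shares it are hypothetical, and the code is illustrative rather than taken from the kubelet.

// eviction_math.go - resolves the logged hard-eviction thresholds against
// assumed capacities so the percentage-based signals become absolute numbers.
package main

import "fmt"

func main() {
	const mi = int64(1 << 20) // bytes per MiB

	// Hypothetical capacities, chosen only to make the arithmetic concrete.
	nodefsCapacity := 20480 * mi      // assumed 20 GiB root filesystem
	imagefsCapacity := nodefsCapacity // assume container images live on the same filesystem

	memThreshold := 100 * mi                       // memory.available < 100Mi (absolute)
	nodefsThreshold := nodefsCapacity / 10         // nodefs.available < 10%
	imagefsThreshold := imagefsCapacity * 15 / 100 // imagefs.available < 15%

	fmt.Printf("evict when memory.available  < %d MiB\n", memThreshold/mi)
	fmt.Printf("evict when nodefs.available  < %d MiB\n", nodefsThreshold/mi)
	fmt.Printf("evict when imagefs.available < %d MiB\n", imagefsThreshold/mi)
	// The two inodesFree signals (5%) are fractions of the filesystems' inode
	// counts rather than bytes, so they are not converted here.
}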
Sep 9 22:11:51.836905 kubelet[2443]: I0909 22:11:51.832475 2443 server.go:1274] "Started kubelet" Sep 9 22:11:51.836905 kubelet[2443]: I0909 22:11:51.833268 2443 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Sep 9 22:11:51.836905 kubelet[2443]: I0909 22:11:51.834591 2443 server.go:449] "Adding debug handlers to kubelet server" Sep 9 22:11:51.856084 kubelet[2443]: W0909 22:11:51.848355 2443 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.117:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.117:6443: connect: connection refused Sep 9 22:11:51.856924 kubelet[2443]: E0909 22:11:51.856873 2443 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.117:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.117:6443: connect: connection refused" logger="UnhandledError" Sep 9 22:11:51.857031 kubelet[2443]: W0909 22:11:51.849011 2443 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.117:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.117:6443: connect: connection refused Sep 9 22:11:51.857092 kubelet[2443]: E0909 22:11:51.857038 2443 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.117:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.117:6443: connect: connection refused" logger="UnhandledError" Sep 9 22:11:51.861071 kubelet[2443]: I0909 22:11:51.857089 2443 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Sep 9 22:11:51.861071 kubelet[2443]: I0909 22:11:51.857599 2443 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Sep 9 22:11:51.861071 kubelet[2443]: I0909 22:11:51.860163 2443 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Sep 9 22:11:51.887654 kubelet[2443]: I0909 22:11:51.887582 2443 volume_manager.go:289] "Starting Kubelet Volume Manager" Sep 9 22:11:51.888400 kubelet[2443]: I0909 22:11:51.888372 2443 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Sep 9 22:11:51.896965 kubelet[2443]: I0909 22:11:51.894855 2443 desired_state_of_world_populator.go:147] "Desired state populator starts to run" Sep 9 22:11:51.897462 kubelet[2443]: I0909 22:11:51.896073 2443 reconciler.go:26] "Reconciler: start to sync state" Sep 9 22:11:51.897667 kubelet[2443]: W0909 22:11:51.896977 2443 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.117:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.117:6443: connect: connection refused Sep 9 22:11:51.897866 kubelet[2443]: E0909 22:11:51.897848 2443 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 9 22:11:51.898009 kubelet[2443]: E0909 22:11:51.897778 2443 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get 
\"https://10.0.0.117:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.117:6443: connect: connection refused" logger="UnhandledError" Sep 9 22:11:51.899385 kubelet[2443]: E0909 22:11:51.898375 2443 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.117:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.117:6443: connect: connection refused" interval="200ms" Sep 9 22:11:51.910371 kubelet[2443]: I0909 22:11:51.909464 2443 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Sep 9 22:11:51.910371 kubelet[2443]: E0909 22:11:51.910187 2443 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Sep 9 22:11:51.917882 kubelet[2443]: E0909 22:11:51.893737 2443 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.117:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.117:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.1863bcd7252e4eed default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-09-09 22:11:51.832407789 +0000 UTC m=+2.010422785,LastTimestamp:2025-09-09 22:11:51.832407789 +0000 UTC m=+2.010422785,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Sep 9 22:11:51.930008 kubelet[2443]: I0909 22:11:51.929965 2443 factory.go:221] Registration of the containerd container factory successfully Sep 9 22:11:51.930218 kubelet[2443]: I0909 22:11:51.930204 2443 factory.go:221] Registration of the systemd container factory successfully Sep 9 22:11:51.965421 kubelet[2443]: I0909 22:11:51.965324 2443 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Sep 9 22:11:51.967746 kubelet[2443]: I0909 22:11:51.967724 2443 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Sep 9 22:11:51.967881 kubelet[2443]: I0909 22:11:51.967868 2443 status_manager.go:217] "Starting to sync pod status with apiserver" Sep 9 22:11:51.967994 kubelet[2443]: I0909 22:11:51.967982 2443 kubelet.go:2321] "Starting kubelet main sync loop" Sep 9 22:11:51.968132 kubelet[2443]: E0909 22:11:51.968108 2443 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Sep 9 22:11:51.968934 kubelet[2443]: W0909 22:11:51.968891 2443 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.117:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.117:6443: connect: connection refused Sep 9 22:11:51.969042 kubelet[2443]: E0909 22:11:51.969018 2443 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.117:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.117:6443: connect: connection refused" logger="UnhandledError" Sep 9 22:11:51.969655 kubelet[2443]: I0909 22:11:51.969614 2443 cpu_manager.go:214] "Starting CPU manager" policy="none" Sep 9 22:11:51.969854 kubelet[2443]: I0909 22:11:51.969838 2443 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Sep 9 22:11:51.969959 kubelet[2443]: I0909 22:11:51.969944 2443 state_mem.go:36] "Initialized new in-memory state store" Sep 9 22:11:51.978593 kubelet[2443]: I0909 22:11:51.978539 2443 policy_none.go:49] "None policy: Start" Sep 9 22:11:51.982166 kubelet[2443]: I0909 22:11:51.982035 2443 memory_manager.go:170] "Starting memorymanager" policy="None" Sep 9 22:11:51.982166 kubelet[2443]: I0909 22:11:51.982091 2443 state_mem.go:35] "Initializing new in-memory state store" Sep 9 22:11:51.995844 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Sep 9 22:11:51.998057 kubelet[2443]: E0909 22:11:51.998022 2443 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 9 22:11:52.022793 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Sep 9 22:11:52.035821 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. 
Sep 9 22:11:52.068212 kubelet[2443]: I0909 22:11:52.065892 2443 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Sep 9 22:11:52.068212 kubelet[2443]: I0909 22:11:52.066258 2443 eviction_manager.go:189] "Eviction manager: starting control loop" Sep 9 22:11:52.068212 kubelet[2443]: I0909 22:11:52.066280 2443 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Sep 9 22:11:52.071506 kubelet[2443]: I0909 22:11:52.069095 2443 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Sep 9 22:11:52.081875 kubelet[2443]: E0909 22:11:52.081231 2443 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Sep 9 22:11:52.100629 kubelet[2443]: E0909 22:11:52.100495 2443 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.117:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.117:6443: connect: connection refused" interval="400ms" Sep 9 22:11:52.112231 systemd[1]: Created slice kubepods-burstable-podfec3f691a145cb26ff55e4af388500b7.slice - libcontainer container kubepods-burstable-podfec3f691a145cb26ff55e4af388500b7.slice. Sep 9 22:11:52.148387 systemd[1]: Created slice kubepods-burstable-podbe20d795376ad0f9e0e9240faf84fcd6.slice - libcontainer container kubepods-burstable-podbe20d795376ad0f9e0e9240faf84fcd6.slice. Sep 9 22:11:52.168097 kubelet[2443]: I0909 22:11:52.167800 2443 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Sep 9 22:11:52.168483 kubelet[2443]: E0909 22:11:52.168431 2443 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.117:6443/api/v1/nodes\": dial tcp 10.0.0.117:6443: connect: connection refused" node="localhost" Sep 9 22:11:52.174210 systemd[1]: Created slice kubepods-burstable-pod5dc878868de11c6196259ae42039f4ff.slice - libcontainer container kubepods-burstable-pod5dc878868de11c6196259ae42039f4ff.slice. 
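The lease controller's "Failed to ensure lease exists, will retry" messages back off with a doubling interval while the API server's socket still refuses connections: 200ms a few entries earlier, 400ms here, then 800ms, 1.6s and 3.2s further down. The loop below is a generic sketch of that doubling pattern, not the kubelet's lease controller; ensureLease is a hypothetical stand-in and the 7s cap is assumed for the sketch.

// backoff_sketch.go - a generic doubling-backoff retry loop, in the same
// spirit as the retry intervals visible in the controller.go messages.
package main

import (
	"errors"
	"fmt"
	"time"
)

// ensureLease stands in for the call that keeps failing while the API server
// at 10.0.0.117:6443 is not yet reachable.
func ensureLease() error { return errors.New("connect: connection refused") }

func main() {
	interval := 200 * time.Millisecond
	const maxInterval = 7 * time.Second // cap assumed for this sketch

	for attempt := 1; attempt <= 5; attempt++ {
		err := ensureLease()
		if err == nil {
			fmt.Println("lease ensured")
			return
		}
		fmt.Printf("attempt %d failed: %v; retrying in %s\n", attempt, err, interval)
		time.Sleep(interval)
		interval *= 2
		if interval > maxInterval {
			interval = maxInterval
		}
	}
	fmt.Println("still failing; a real controller would keep retrying")
}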
Sep 9 22:11:52.200616 kubelet[2443]: I0909 22:11:52.198689 2443 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/fec3f691a145cb26ff55e4af388500b7-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"fec3f691a145cb26ff55e4af388500b7\") " pod="kube-system/kube-controller-manager-localhost" Sep 9 22:11:52.200616 kubelet[2443]: I0909 22:11:52.200094 2443 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/fec3f691a145cb26ff55e4af388500b7-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"fec3f691a145cb26ff55e4af388500b7\") " pod="kube-system/kube-controller-manager-localhost" Sep 9 22:11:52.201001 kubelet[2443]: I0909 22:11:52.200955 2443 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/fec3f691a145cb26ff55e4af388500b7-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"fec3f691a145cb26ff55e4af388500b7\") " pod="kube-system/kube-controller-manager-localhost" Sep 9 22:11:52.201243 kubelet[2443]: I0909 22:11:52.201197 2443 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/5dc878868de11c6196259ae42039f4ff-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"5dc878868de11c6196259ae42039f4ff\") " pod="kube-system/kube-scheduler-localhost" Sep 9 22:11:52.201243 kubelet[2443]: I0909 22:11:52.201232 2443 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/be20d795376ad0f9e0e9240faf84fcd6-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"be20d795376ad0f9e0e9240faf84fcd6\") " pod="kube-system/kube-apiserver-localhost" Sep 9 22:11:52.201335 kubelet[2443]: I0909 22:11:52.201255 2443 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/be20d795376ad0f9e0e9240faf84fcd6-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"be20d795376ad0f9e0e9240faf84fcd6\") " pod="kube-system/kube-apiserver-localhost" Sep 9 22:11:52.201335 kubelet[2443]: I0909 22:11:52.201274 2443 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/be20d795376ad0f9e0e9240faf84fcd6-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"be20d795376ad0f9e0e9240faf84fcd6\") " pod="kube-system/kube-apiserver-localhost" Sep 9 22:11:52.201335 kubelet[2443]: I0909 22:11:52.201296 2443 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/fec3f691a145cb26ff55e4af388500b7-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"fec3f691a145cb26ff55e4af388500b7\") " pod="kube-system/kube-controller-manager-localhost" Sep 9 22:11:52.201335 kubelet[2443]: I0909 22:11:52.201318 2443 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/fec3f691a145cb26ff55e4af388500b7-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"fec3f691a145cb26ff55e4af388500b7\") " 
pod="kube-system/kube-controller-manager-localhost" Sep 9 22:11:52.371102 kubelet[2443]: I0909 22:11:52.370869 2443 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Sep 9 22:11:52.371621 kubelet[2443]: E0909 22:11:52.371284 2443 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.117:6443/api/v1/nodes\": dial tcp 10.0.0.117:6443: connect: connection refused" node="localhost" Sep 9 22:11:52.442400 kubelet[2443]: E0909 22:11:52.441634 2443 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 22:11:52.443737 containerd[1584]: time="2025-09-09T22:11:52.443144123Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:fec3f691a145cb26ff55e4af388500b7,Namespace:kube-system,Attempt:0,}" Sep 9 22:11:52.470350 kubelet[2443]: E0909 22:11:52.469242 2443 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 22:11:52.473918 containerd[1584]: time="2025-09-09T22:11:52.473410784Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:be20d795376ad0f9e0e9240faf84fcd6,Namespace:kube-system,Attempt:0,}" Sep 9 22:11:52.491056 kubelet[2443]: E0909 22:11:52.488942 2443 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 22:11:52.491262 containerd[1584]: time="2025-09-09T22:11:52.489544790Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:5dc878868de11c6196259ae42039f4ff,Namespace:kube-system,Attempt:0,}" Sep 9 22:11:52.506912 kubelet[2443]: E0909 22:11:52.505797 2443 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.117:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.117:6443: connect: connection refused" interval="800ms" Sep 9 22:11:52.635392 containerd[1584]: time="2025-09-09T22:11:52.633896613Z" level=info msg="connecting to shim d7ba7a65b978bf8892a0adb10abc4efbba230be29863e6436a9d1531cff40cf3" address="unix:///run/containerd/s/12c0a45bdf2a11b6d88d15fbddf92b8a40594e5deff201293fca0980f96c4582" namespace=k8s.io protocol=ttrpc version=3 Sep 9 22:11:52.644624 containerd[1584]: time="2025-09-09T22:11:52.644527520Z" level=info msg="connecting to shim b91042314f058719e7c32e37581568fef80e33da2214405e9b1244b54a156f77" address="unix:///run/containerd/s/567bc8b5ab190ab75d9bbcbe8f7f66da67cd3a94d4b7f9658bd456987bcaa33d" namespace=k8s.io protocol=ttrpc version=3 Sep 9 22:11:52.661924 containerd[1584]: time="2025-09-09T22:11:52.661797867Z" level=info msg="connecting to shim 8456bc6dc40049dfe4690eaf1e04dc27e577b3a374c6b665250e09da8aa88ae7" address="unix:///run/containerd/s/b35f95f78a17866f0b37b904913fa96a5fbe9733a87335a9b48e063cbf5601e3" namespace=k8s.io protocol=ttrpc version=3 Sep 9 22:11:52.775319 kubelet[2443]: I0909 22:11:52.775276 2443 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Sep 9 22:11:52.777129 kubelet[2443]: E0909 22:11:52.776606 2443 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.117:6443/api/v1/nodes\": dial tcp 10.0.0.117:6443: connect: connection refused" node="localhost" Sep 9 
22:11:52.799269 kubelet[2443]: W0909 22:11:52.799208 2443 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.117:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.117:6443: connect: connection refused Sep 9 22:11:52.799572 kubelet[2443]: E0909 22:11:52.799461 2443 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.117:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.117:6443: connect: connection refused" logger="UnhandledError" Sep 9 22:11:52.812264 systemd[1]: Started cri-containerd-d7ba7a65b978bf8892a0adb10abc4efbba230be29863e6436a9d1531cff40cf3.scope - libcontainer container d7ba7a65b978bf8892a0adb10abc4efbba230be29863e6436a9d1531cff40cf3. Sep 9 22:11:52.841579 systemd[1]: Started cri-containerd-8456bc6dc40049dfe4690eaf1e04dc27e577b3a374c6b665250e09da8aa88ae7.scope - libcontainer container 8456bc6dc40049dfe4690eaf1e04dc27e577b3a374c6b665250e09da8aa88ae7. Sep 9 22:11:52.851832 systemd[1]: Started cri-containerd-b91042314f058719e7c32e37581568fef80e33da2214405e9b1244b54a156f77.scope - libcontainer container b91042314f058719e7c32e37581568fef80e33da2214405e9b1244b54a156f77. Sep 9 22:11:52.994825 kubelet[2443]: W0909 22:11:52.994285 2443 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.117:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.117:6443: connect: connection refused Sep 9 22:11:52.994825 kubelet[2443]: E0909 22:11:52.994355 2443 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.117:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.117:6443: connect: connection refused" logger="UnhandledError" Sep 9 22:11:53.126502 kubelet[2443]: W0909 22:11:53.126332 2443 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.117:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.117:6443: connect: connection refused Sep 9 22:11:53.126502 kubelet[2443]: E0909 22:11:53.126429 2443 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.117:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.117:6443: connect: connection refused" logger="UnhandledError" Sep 9 22:11:53.310415 kubelet[2443]: E0909 22:11:53.310356 2443 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.117:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.117:6443: connect: connection refused" interval="1.6s" Sep 9 22:11:53.398170 containerd[1584]: time="2025-09-09T22:11:53.392378020Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:be20d795376ad0f9e0e9240faf84fcd6,Namespace:kube-system,Attempt:0,} returns sandbox id \"d7ba7a65b978bf8892a0adb10abc4efbba230be29863e6436a9d1531cff40cf3\"" Sep 9 22:11:53.419338 kubelet[2443]: E0909 22:11:53.413902 2443 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers 
have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 22:11:53.419338 kubelet[2443]: W0909 22:11:53.418396 2443 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.117:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.117:6443: connect: connection refused Sep 9 22:11:53.419338 kubelet[2443]: E0909 22:11:53.418480 2443 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.117:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.117:6443: connect: connection refused" logger="UnhandledError" Sep 9 22:11:53.428251 containerd[1584]: time="2025-09-09T22:11:53.426189870Z" level=info msg="CreateContainer within sandbox \"d7ba7a65b978bf8892a0adb10abc4efbba230be29863e6436a9d1531cff40cf3\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Sep 9 22:11:53.433389 containerd[1584]: time="2025-09-09T22:11:53.431943747Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:5dc878868de11c6196259ae42039f4ff,Namespace:kube-system,Attempt:0,} returns sandbox id \"8456bc6dc40049dfe4690eaf1e04dc27e577b3a374c6b665250e09da8aa88ae7\"" Sep 9 22:11:53.442267 kubelet[2443]: E0909 22:11:53.441870 2443 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 22:11:53.466602 containerd[1584]: time="2025-09-09T22:11:53.458890764Z" level=info msg="CreateContainer within sandbox \"8456bc6dc40049dfe4690eaf1e04dc27e577b3a374c6b665250e09da8aa88ae7\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Sep 9 22:11:53.473734 containerd[1584]: time="2025-09-09T22:11:53.473649077Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:fec3f691a145cb26ff55e4af388500b7,Namespace:kube-system,Attempt:0,} returns sandbox id \"b91042314f058719e7c32e37581568fef80e33da2214405e9b1244b54a156f77\"" Sep 9 22:11:53.480858 kubelet[2443]: E0909 22:11:53.480470 2443 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 22:11:53.491227 containerd[1584]: time="2025-09-09T22:11:53.491173271Z" level=info msg="CreateContainer within sandbox \"b91042314f058719e7c32e37581568fef80e33da2214405e9b1244b54a156f77\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Sep 9 22:11:53.512809 containerd[1584]: time="2025-09-09T22:11:53.512741269Z" level=info msg="Container 7e46a0ba9e1412a1c3b19b5a6361bb3a4435814feabcda617760c153e7920861: CDI devices from CRI Config.CDIDevices: []" Sep 9 22:11:53.532665 containerd[1584]: time="2025-09-09T22:11:53.531676652Z" level=info msg="Container c8a5e3a200dcd7f9d31349921942ecdd2f2d052aa540fa8f5e65dccece494d6b: CDI devices from CRI Config.CDIDevices: []" Sep 9 22:11:53.574796 containerd[1584]: time="2025-09-09T22:11:53.573680999Z" level=info msg="CreateContainer within sandbox \"d7ba7a65b978bf8892a0adb10abc4efbba230be29863e6436a9d1531cff40cf3\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"7e46a0ba9e1412a1c3b19b5a6361bb3a4435814feabcda617760c153e7920861\"" Sep 9 22:11:53.575019 containerd[1584]: time="2025-09-09T22:11:53.574929511Z" level=info 
msg="StartContainer for \"7e46a0ba9e1412a1c3b19b5a6361bb3a4435814feabcda617760c153e7920861\"" Sep 9 22:11:53.575831 containerd[1584]: time="2025-09-09T22:11:53.575316043Z" level=info msg="Container b1e21f22f2282228f9baa9438f796aff1631c89fee819034fc2a1ad72861c022: CDI devices from CRI Config.CDIDevices: []" Sep 9 22:11:53.576948 containerd[1584]: time="2025-09-09T22:11:53.576530688Z" level=info msg="connecting to shim 7e46a0ba9e1412a1c3b19b5a6361bb3a4435814feabcda617760c153e7920861" address="unix:///run/containerd/s/12c0a45bdf2a11b6d88d15fbddf92b8a40594e5deff201293fca0980f96c4582" protocol=ttrpc version=3 Sep 9 22:11:53.579809 kubelet[2443]: I0909 22:11:53.579684 2443 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Sep 9 22:11:53.580474 kubelet[2443]: E0909 22:11:53.580439 2443 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.117:6443/api/v1/nodes\": dial tcp 10.0.0.117:6443: connect: connection refused" node="localhost" Sep 9 22:11:53.605637 containerd[1584]: time="2025-09-09T22:11:53.605442888Z" level=info msg="CreateContainer within sandbox \"8456bc6dc40049dfe4690eaf1e04dc27e577b3a374c6b665250e09da8aa88ae7\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"c8a5e3a200dcd7f9d31349921942ecdd2f2d052aa540fa8f5e65dccece494d6b\"" Sep 9 22:11:53.606953 containerd[1584]: time="2025-09-09T22:11:53.606913834Z" level=info msg="StartContainer for \"c8a5e3a200dcd7f9d31349921942ecdd2f2d052aa540fa8f5e65dccece494d6b\"" Sep 9 22:11:53.608633 containerd[1584]: time="2025-09-09T22:11:53.608599911Z" level=info msg="connecting to shim c8a5e3a200dcd7f9d31349921942ecdd2f2d052aa540fa8f5e65dccece494d6b" address="unix:///run/containerd/s/b35f95f78a17866f0b37b904913fa96a5fbe9733a87335a9b48e063cbf5601e3" protocol=ttrpc version=3 Sep 9 22:11:53.611521 containerd[1584]: time="2025-09-09T22:11:53.609051763Z" level=info msg="CreateContainer within sandbox \"b91042314f058719e7c32e37581568fef80e33da2214405e9b1244b54a156f77\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"b1e21f22f2282228f9baa9438f796aff1631c89fee819034fc2a1ad72861c022\"" Sep 9 22:11:53.611521 containerd[1584]: time="2025-09-09T22:11:53.609545398Z" level=info msg="StartContainer for \"b1e21f22f2282228f9baa9438f796aff1631c89fee819034fc2a1ad72861c022\"" Sep 9 22:11:53.611964 containerd[1584]: time="2025-09-09T22:11:53.611871421Z" level=info msg="connecting to shim b1e21f22f2282228f9baa9438f796aff1631c89fee819034fc2a1ad72861c022" address="unix:///run/containerd/s/567bc8b5ab190ab75d9bbcbe8f7f66da67cd3a94d4b7f9658bd456987bcaa33d" protocol=ttrpc version=3 Sep 9 22:11:53.643927 systemd[1]: Started cri-containerd-7e46a0ba9e1412a1c3b19b5a6361bb3a4435814feabcda617760c153e7920861.scope - libcontainer container 7e46a0ba9e1412a1c3b19b5a6361bb3a4435814feabcda617760c153e7920861. Sep 9 22:11:53.661109 systemd[1]: Started cri-containerd-b1e21f22f2282228f9baa9438f796aff1631c89fee819034fc2a1ad72861c022.scope - libcontainer container b1e21f22f2282228f9baa9438f796aff1631c89fee819034fc2a1ad72861c022. Sep 9 22:11:53.675820 systemd[1]: Started cri-containerd-c8a5e3a200dcd7f9d31349921942ecdd2f2d052aa540fa8f5e65dccece494d6b.scope - libcontainer container c8a5e3a200dcd7f9d31349921942ecdd2f2d052aa540fa8f5e65dccece494d6b. 
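Each of the three control-plane pods above goes through the same CRI sequence: RunPodSandbox returns a sandbox id, CreateContainer returns a container id inside that sandbox, and StartContainer is invoked while systemd starts a matching cri-containerd-<id>.scope, with containerd reaching each container through the unix:///run/containerd/s/... shim addresses. The sketch below is a minimal read-only CRI client for inspecting that state; it assumes containerd's default API socket at /run/containerd/containerd.sock plus the google.golang.org/grpc and k8s.io/cri-api modules, and is not how the kubelet itself issues these calls.

// cri_inspect.go - minimal read-only CRI client sketch (assumes the default
// containerd socket; the /run/containerd/s/... paths in the log are internal
// shim endpoints, not this API socket).
package main

import (
	"context"
	"fmt"
	"log"
	"time"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	conn, err := grpc.Dial("unix:///run/containerd/containerd.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()

	client := runtimeapi.NewRuntimeServiceClient(conn)
	ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
	defer cancel()

	// Reports the runtime the kubelet logged: containerd v2.0.5.
	ver, err := client.Version(ctx, &runtimeapi.VersionRequest{})
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("runtime:", ver.RuntimeName, ver.RuntimeVersion)

	// Lists the sandboxes created by the RunPodSandbox calls above.
	sandboxes, err := client.ListPodSandbox(ctx, &runtimeapi.ListPodSandboxRequest{})
	if err != nil {
		log.Fatal(err)
	}
	for _, s := range sandboxes.Items {
		fmt.Println(s.Id, s.Metadata.Name, s.State)
	}
}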
Sep 9 22:11:53.683901 kubelet[2443]: E0909 22:11:53.676907 2443 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.117:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.117:6443: connect: connection refused" logger="UnhandledError" Sep 9 22:11:53.930207 containerd[1584]: time="2025-09-09T22:11:53.929595198Z" level=info msg="StartContainer for \"c8a5e3a200dcd7f9d31349921942ecdd2f2d052aa540fa8f5e65dccece494d6b\" returns successfully" Sep 9 22:11:53.930207 containerd[1584]: time="2025-09-09T22:11:53.930325196Z" level=info msg="StartContainer for \"7e46a0ba9e1412a1c3b19b5a6361bb3a4435814feabcda617760c153e7920861\" returns successfully" Sep 9 22:11:53.953638 containerd[1584]: time="2025-09-09T22:11:53.953483177Z" level=info msg="StartContainer for \"b1e21f22f2282228f9baa9438f796aff1631c89fee819034fc2a1ad72861c022\" returns successfully" Sep 9 22:11:53.999496 kubelet[2443]: E0909 22:11:53.999439 2443 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 22:11:54.005222 kubelet[2443]: E0909 22:11:54.005176 2443 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 22:11:54.018020 kubelet[2443]: E0909 22:11:54.017959 2443 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 22:11:55.015137 kubelet[2443]: E0909 22:11:55.014933 2443 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 22:11:55.188351 kubelet[2443]: I0909 22:11:55.186629 2443 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Sep 9 22:11:56.442226 kubelet[2443]: E0909 22:11:56.442098 2443 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 22:11:56.793693 kubelet[2443]: I0909 22:11:56.791473 2443 kubelet_node_status.go:75] "Successfully registered node" node="localhost" Sep 9 22:11:56.793693 kubelet[2443]: E0909 22:11:56.791526 2443 kubelet_node_status.go:535] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not found" Sep 9 22:11:56.816273 kubelet[2443]: I0909 22:11:56.806844 2443 apiserver.go:52] "Watching apiserver" Sep 9 22:11:56.893539 kubelet[2443]: E0909 22:11:56.893463 2443 controller.go:145] "Failed to ensure lease exists, will retry" err="namespaces \"kube-node-lease\" not found" interval="3.2s" Sep 9 22:11:56.893539 kubelet[2443]: E0909 22:11:56.893570 2443 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{localhost.1863bcd7252e4eed default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-09-09 22:11:51.832407789 +0000 UTC 
m=+2.010422785,LastTimestamp:2025-09-09 22:11:51.832407789 +0000 UTC m=+2.010422785,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Sep 9 22:11:56.898680 kubelet[2443]: I0909 22:11:56.898582 2443 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world" Sep 9 22:11:58.009803 kubelet[2443]: E0909 22:11:58.009371 2443 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 22:11:58.036754 kubelet[2443]: E0909 22:11:58.035123 2443 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 22:12:00.314164 kubelet[2443]: E0909 22:12:00.313407 2443 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 22:12:01.044548 kubelet[2443]: E0909 22:12:01.044452 2443 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 22:12:01.354132 systemd[1]: Reload requested from client PID 2721 ('systemctl') (unit session-7.scope)... Sep 9 22:12:01.354158 systemd[1]: Reloading... Sep 9 22:12:01.758083 zram_generator::config[2765]: No configuration found. Sep 9 22:12:02.487333 kubelet[2443]: I0909 22:12:02.487127 2443 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=5.487093412 podStartE2EDuration="5.487093412s" podCreationTimestamp="2025-09-09 22:11:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-09 22:12:02.486593973 +0000 UTC m=+12.664608969" watchObservedRunningTime="2025-09-09 22:12:02.487093412 +0000 UTC m=+12.665108418" Sep 9 22:12:02.740801 systemd[1]: Reloading finished in 1386 ms. Sep 9 22:12:02.842336 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Sep 9 22:12:02.892136 systemd[1]: kubelet.service: Deactivated successfully. Sep 9 22:12:02.893051 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Sep 9 22:12:02.893326 systemd[1]: kubelet.service: Consumed 2.035s CPU time, 133M memory peak. Sep 9 22:12:02.906644 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 9 22:12:03.526943 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 9 22:12:03.555310 (kubelet)[2811]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Sep 9 22:12:04.364054 sudo[2824]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Sep 9 22:12:04.365973 sudo[2824]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0) Sep 9 22:12:04.624407 kernel: clocksource: Long readout interval, skipping watchdog check: cs_nsec: 1069960640 wd_nsec: 1069960137 Sep 9 22:12:04.638058 kubelet[2811]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. 
See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 9 22:12:04.638058 kubelet[2811]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Sep 9 22:12:04.638058 kubelet[2811]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 9 22:12:04.638058 kubelet[2811]: I0909 22:12:04.638004 2811 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Sep 9 22:12:04.667172 kubelet[2811]: I0909 22:12:04.664632 2811 server.go:491] "Kubelet version" kubeletVersion="v1.31.8" Sep 9 22:12:04.667172 kubelet[2811]: I0909 22:12:04.664687 2811 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Sep 9 22:12:04.667172 kubelet[2811]: I0909 22:12:04.665026 2811 server.go:934] "Client rotation is on, will bootstrap in background" Sep 9 22:12:04.673731 kubelet[2811]: I0909 22:12:04.672832 2811 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Sep 9 22:12:04.688846 kubelet[2811]: I0909 22:12:04.686019 2811 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Sep 9 22:12:04.708161 kubelet[2811]: I0909 22:12:04.708127 2811 server.go:1431] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Sep 9 22:12:04.738732 kubelet[2811]: I0909 22:12:04.738469 2811 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Sep 9 22:12:04.738732 kubelet[2811]: I0909 22:12:04.738670 2811 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Sep 9 22:12:04.739322 kubelet[2811]: I0909 22:12:04.739269 2811 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Sep 9 22:12:04.739618 kubelet[2811]: I0909 22:12:04.739399 2811 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Sep 9 22:12:04.739825 kubelet[2811]: I0909 22:12:04.739809 2811 topology_manager.go:138] "Creating topology manager with none policy" Sep 9 22:12:04.740058 kubelet[2811]: I0909 22:12:04.740022 2811 container_manager_linux.go:300] "Creating device plugin manager" Sep 9 22:12:04.740253 kubelet[2811]: I0909 22:12:04.740229 2811 state_mem.go:36] "Initialized new in-memory state store" Sep 9 22:12:04.740667 kubelet[2811]: I0909 22:12:04.740623 2811 kubelet.go:408] "Attempting to sync node with API server" Sep 9 22:12:04.741988 kubelet[2811]: I0909 22:12:04.741780 2811 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Sep 9 22:12:04.741988 kubelet[2811]: I0909 22:12:04.741846 2811 kubelet.go:314] "Adding apiserver pod source" Sep 9 22:12:04.741988 kubelet[2811]: I0909 22:12:04.741873 2811 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Sep 9 22:12:04.747105 kubelet[2811]: I0909 22:12:04.747053 2811 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v2.0.5" apiVersion="v1" Sep 9 22:12:04.748631 kubelet[2811]: I0909 22:12:04.748537 2811 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Sep 9 22:12:04.753744 kubelet[2811]: I0909 22:12:04.750522 2811 server.go:1274] "Started kubelet" Sep 9 22:12:04.753744 kubelet[2811]: I0909 22:12:04.752243 2811 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Sep 9 22:12:04.753744 kubelet[2811]: I0909 22:12:04.752932 
2811 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Sep 9 22:12:04.758751 kubelet[2811]: I0909 22:12:04.757126 2811 server.go:449] "Adding debug handlers to kubelet server" Sep 9 22:12:04.762740 kubelet[2811]: I0909 22:12:04.762593 2811 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Sep 9 22:12:04.773077 kubelet[2811]: I0909 22:12:04.771253 2811 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Sep 9 22:12:04.783427 kubelet[2811]: I0909 22:12:04.783346 2811 volume_manager.go:289] "Starting Kubelet Volume Manager" Sep 9 22:12:04.783740 kubelet[2811]: I0909 22:12:04.783668 2811 desired_state_of_world_populator.go:147] "Desired state populator starts to run" Sep 9 22:12:04.791931 kubelet[2811]: I0909 22:12:04.791845 2811 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Sep 9 22:12:04.795737 kubelet[2811]: E0909 22:12:04.795497 2811 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Sep 9 22:12:04.804038 kubelet[2811]: I0909 22:12:04.803958 2811 factory.go:221] Registration of the containerd container factory successfully Sep 9 22:12:04.804038 kubelet[2811]: I0909 22:12:04.804018 2811 factory.go:221] Registration of the systemd container factory successfully Sep 9 22:12:04.849649 kubelet[2811]: I0909 22:12:04.849310 2811 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Sep 9 22:12:04.860734 kubelet[2811]: I0909 22:12:04.860576 2811 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Sep 9 22:12:04.860734 kubelet[2811]: I0909 22:12:04.860655 2811 status_manager.go:217] "Starting to sync pod status with apiserver" Sep 9 22:12:04.861047 kubelet[2811]: I0909 22:12:04.861032 2811 kubelet.go:2321] "Starting kubelet main sync loop" Sep 9 22:12:04.865212 kubelet[2811]: E0909 22:12:04.865135 2811 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Sep 9 22:12:05.052364 kubelet[2811]: E0909 22:12:05.041193 2811 kubelet.go:2345] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Sep 9 22:12:05.052364 kubelet[2811]: I0909 22:12:05.042229 2811 cpu_manager.go:214] "Starting CPU manager" policy="none" Sep 9 22:12:05.052364 kubelet[2811]: I0909 22:12:05.042259 2811 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Sep 9 22:12:05.052364 kubelet[2811]: I0909 22:12:05.042329 2811 state_mem.go:36] "Initialized new in-memory state store" Sep 9 22:12:05.052364 kubelet[2811]: I0909 22:12:05.043053 2811 state_mem.go:88] "Updated default CPUSet" cpuSet="" Sep 9 22:12:05.052364 kubelet[2811]: I0909 22:12:05.043111 2811 state_mem.go:96] "Updated CPUSet assignments" assignments={} Sep 9 22:12:05.052364 kubelet[2811]: I0909 22:12:05.043202 2811 policy_none.go:49] "None policy: Start" Sep 9 22:12:05.052364 kubelet[2811]: I0909 22:12:05.045675 2811 memory_manager.go:170] "Starting memorymanager" policy="None" Sep 9 22:12:05.052364 kubelet[2811]: I0909 22:12:05.045750 2811 state_mem.go:35] "Initializing new in-memory state store" Sep 9 22:12:05.052364 kubelet[2811]: I0909 22:12:05.046081 2811 state_mem.go:75] "Updated machine memory state" Sep 9 22:12:05.052364 kubelet[2811]: I0909 22:12:05.050537 2811 reconciler.go:26] "Reconciler: start to sync state" Sep 9 22:12:05.052364 kubelet[2811]: I0909 22:12:05.050823 2811 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Sep 9 22:12:05.064067 kubelet[2811]: I0909 22:12:05.061118 2811 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Sep 9 22:12:05.064067 kubelet[2811]: I0909 22:12:05.061414 2811 eviction_manager.go:189] "Eviction manager: starting control loop" Sep 9 22:12:05.064067 kubelet[2811]: I0909 22:12:05.061432 2811 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Sep 9 22:12:05.065649 kubelet[2811]: I0909 22:12:05.065106 2811 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Sep 9 22:12:05.219424 kubelet[2811]: I0909 22:12:05.218294 2811 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Sep 9 22:12:05.342624 kubelet[2811]: I0909 22:12:05.341670 2811 kubelet_node_status.go:111] "Node was previously registered" node="localhost" Sep 9 22:12:05.342624 kubelet[2811]: I0909 22:12:05.341989 2811 kubelet_node_status.go:75] "Successfully registered node" node="localhost" Sep 9 22:12:05.353109 kubelet[2811]: I0909 22:12:05.352219 2811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/fec3f691a145cb26ff55e4af388500b7-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"fec3f691a145cb26ff55e4af388500b7\") " pod="kube-system/kube-controller-manager-localhost" Sep 9 22:12:05.353109 kubelet[2811]: I0909 22:12:05.352398 2811 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/fec3f691a145cb26ff55e4af388500b7-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"fec3f691a145cb26ff55e4af388500b7\") " pod="kube-system/kube-controller-manager-localhost" Sep 9 22:12:05.353109 kubelet[2811]: I0909 22:12:05.352517 2811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/5dc878868de11c6196259ae42039f4ff-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"5dc878868de11c6196259ae42039f4ff\") " pod="kube-system/kube-scheduler-localhost" Sep 9 22:12:05.353109 kubelet[2811]: I0909 22:12:05.352584 2811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/be20d795376ad0f9e0e9240faf84fcd6-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"be20d795376ad0f9e0e9240faf84fcd6\") " pod="kube-system/kube-apiserver-localhost" Sep 9 22:12:05.353109 kubelet[2811]: I0909 22:12:05.352616 2811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/fec3f691a145cb26ff55e4af388500b7-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"fec3f691a145cb26ff55e4af388500b7\") " pod="kube-system/kube-controller-manager-localhost" Sep 9 22:12:05.353473 kubelet[2811]: I0909 22:12:05.352803 2811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/fec3f691a145cb26ff55e4af388500b7-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"fec3f691a145cb26ff55e4af388500b7\") " pod="kube-system/kube-controller-manager-localhost" Sep 9 22:12:05.353473 kubelet[2811]: I0909 22:12:05.352847 2811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/fec3f691a145cb26ff55e4af388500b7-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"fec3f691a145cb26ff55e4af388500b7\") " pod="kube-system/kube-controller-manager-localhost" Sep 9 22:12:05.353473 kubelet[2811]: I0909 22:12:05.352881 2811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/be20d795376ad0f9e0e9240faf84fcd6-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"be20d795376ad0f9e0e9240faf84fcd6\") " pod="kube-system/kube-apiserver-localhost" Sep 9 22:12:05.353473 kubelet[2811]: I0909 22:12:05.352955 2811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/be20d795376ad0f9e0e9240faf84fcd6-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"be20d795376ad0f9e0e9240faf84fcd6\") " pod="kube-system/kube-apiserver-localhost" Sep 9 22:12:05.363359 kubelet[2811]: E0909 22:12:05.362301 2811 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" Sep 9 22:12:05.369567 kubelet[2811]: E0909 22:12:05.369328 2811 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Sep 9 22:12:05.644802 
kubelet[2811]: E0909 22:12:05.643927 2811 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 22:12:05.663246 kubelet[2811]: E0909 22:12:05.663194 2811 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 22:12:05.670887 kubelet[2811]: E0909 22:12:05.670378 2811 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 22:12:05.705558 sudo[2824]: pam_unix(sudo:session): session closed for user root Sep 9 22:12:05.742828 kubelet[2811]: I0909 22:12:05.742748 2811 apiserver.go:52] "Watching apiserver" Sep 9 22:12:05.784776 kubelet[2811]: I0909 22:12:05.784726 2811 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world" Sep 9 22:12:05.921493 kubelet[2811]: E0909 22:12:05.921191 2811 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 22:12:05.921913 kubelet[2811]: E0909 22:12:05.921881 2811 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 22:12:05.922876 kubelet[2811]: E0909 22:12:05.922847 2811 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 22:12:06.452741 kubelet[2811]: I0909 22:12:06.452240 2811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.452203329 podStartE2EDuration="1.452203329s" podCreationTimestamp="2025-09-09 22:12:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-09 22:12:06.196933623 +0000 UTC m=+2.627459026" watchObservedRunningTime="2025-09-09 22:12:06.452203329 +0000 UTC m=+2.882728732" Sep 9 22:12:06.502873 kubelet[2811]: I0909 22:12:06.502796 2811 kuberuntime_manager.go:1635] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Sep 9 22:12:06.504335 containerd[1584]: time="2025-09-09T22:12:06.504265330Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Sep 9 22:12:06.506549 kubelet[2811]: I0909 22:12:06.506519 2811 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Sep 9 22:12:06.922974 kubelet[2811]: E0909 22:12:06.922929 2811 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 22:12:07.411785 systemd[1]: Created slice kubepods-besteffort-podb07afa22_e88b_41a8_8276_f7add37c8f5b.slice - libcontainer container kubepods-besteffort-podb07afa22_e88b_41a8_8276_f7add37c8f5b.slice. Sep 9 22:12:07.429922 systemd[1]: Created slice kubepods-burstable-podb4ddd82f_b23b_4294_ac2f_16085266df62.slice - libcontainer container kubepods-burstable-podb4ddd82f_b23b_4294_ac2f_16085266df62.slice. 
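The "Created slice kubepods-<qos>-pod<uid>.slice" units here follow the naming convention of the systemd cgroup driver the kubelet selected earlier (cgroupDriver="systemd"): the pod's QoS class becomes a parent slice and the pod UID is embedded with its dashes escaped to underscores, since "-" is systemd's slice hierarchy separator. The helper below merely reproduces the names visible in the log; it is an illustration of the convention, not the kubelet's cgroup manager.

// pod_slice_name.go - reproduces the systemd slice names seen in the
// "Created slice kubepods-..." messages from a pod's QoS class and UID.
package main

import (
	"fmt"
	"strings"
)

func podSlice(qosClass, podUID string) string {
	// systemd treats "-" in a unit name as a hierarchy separator
	// (kubepods-burstable-pod<uid>.slice nests under kubepods-burstable.slice),
	// so dashes inside the UID are escaped to underscores.
	return fmt.Sprintf("kubepods-%s-pod%s.slice", qosClass, strings.ReplaceAll(podUID, "-", "_"))
}

func main() {
	// UIDs taken from the kube-proxy-x4b22 and cilium-txwlg pods above.
	fmt.Println(podSlice("besteffort", "b07afa22-e88b-41a8-8276-f7add37c8f5b"))
	fmt.Println(podSlice("burstable", "b4ddd82f-b23b-4294-ac2f-16085266df62"))
	// Output matches the units systemd just created:
	//   kubepods-besteffort-podb07afa22_e88b_41a8_8276_f7add37c8f5b.slice
	//   kubepods-burstable-podb4ddd82f_b23b_4294_ac2f_16085266df62.slice
}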
Sep 9 22:12:07.462480 systemd[1]: Created slice kubepods-besteffort-pod3d4ca5c4_f14a_4a9d_9f31_92bc61c3ff7c.slice - libcontainer container kubepods-besteffort-pod3d4ca5c4_f14a_4a9d_9f31_92bc61c3ff7c.slice. Sep 9 22:12:07.465008 kubelet[2811]: I0909 22:12:07.464950 2811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b4ddd82f-b23b-4294-ac2f-16085266df62-lib-modules\") pod \"cilium-txwlg\" (UID: \"b4ddd82f-b23b-4294-ac2f-16085266df62\") " pod="kube-system/cilium-txwlg" Sep 9 22:12:07.465008 kubelet[2811]: I0909 22:12:07.464990 2811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/b4ddd82f-b23b-4294-ac2f-16085266df62-xtables-lock\") pod \"cilium-txwlg\" (UID: \"b4ddd82f-b23b-4294-ac2f-16085266df62\") " pod="kube-system/cilium-txwlg" Sep 9 22:12:07.465008 kubelet[2811]: I0909 22:12:07.465009 2811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/b4ddd82f-b23b-4294-ac2f-16085266df62-cni-path\") pod \"cilium-txwlg\" (UID: \"b4ddd82f-b23b-4294-ac2f-16085266df62\") " pod="kube-system/cilium-txwlg" Sep 9 22:12:07.465356 kubelet[2811]: I0909 22:12:07.465030 2811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/b4ddd82f-b23b-4294-ac2f-16085266df62-hubble-tls\") pod \"cilium-txwlg\" (UID: \"b4ddd82f-b23b-4294-ac2f-16085266df62\") " pod="kube-system/cilium-txwlg" Sep 9 22:12:07.465356 kubelet[2811]: I0909 22:12:07.465054 2811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/b4ddd82f-b23b-4294-ac2f-16085266df62-cilium-run\") pod \"cilium-txwlg\" (UID: \"b4ddd82f-b23b-4294-ac2f-16085266df62\") " pod="kube-system/cilium-txwlg" Sep 9 22:12:07.465356 kubelet[2811]: I0909 22:12:07.465078 2811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/b4ddd82f-b23b-4294-ac2f-16085266df62-clustermesh-secrets\") pod \"cilium-txwlg\" (UID: \"b4ddd82f-b23b-4294-ac2f-16085266df62\") " pod="kube-system/cilium-txwlg" Sep 9 22:12:07.465356 kubelet[2811]: I0909 22:12:07.465098 2811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/b4ddd82f-b23b-4294-ac2f-16085266df62-cilium-config-path\") pod \"cilium-txwlg\" (UID: \"b4ddd82f-b23b-4294-ac2f-16085266df62\") " pod="kube-system/cilium-txwlg" Sep 9 22:12:07.465356 kubelet[2811]: I0909 22:12:07.465120 2811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6vm5n\" (UniqueName: \"kubernetes.io/projected/b07afa22-e88b-41a8-8276-f7add37c8f5b-kube-api-access-6vm5n\") pod \"kube-proxy-x4b22\" (UID: \"b07afa22-e88b-41a8-8276-f7add37c8f5b\") " pod="kube-system/kube-proxy-x4b22" Sep 9 22:12:07.465647 kubelet[2811]: I0909 22:12:07.465142 2811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9mcmw\" (UniqueName: \"kubernetes.io/projected/b4ddd82f-b23b-4294-ac2f-16085266df62-kube-api-access-9mcmw\") pod \"cilium-txwlg\" (UID: \"b4ddd82f-b23b-4294-ac2f-16085266df62\") " 
pod="kube-system/cilium-txwlg" Sep 9 22:12:07.465647 kubelet[2811]: I0909 22:12:07.465163 2811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/b4ddd82f-b23b-4294-ac2f-16085266df62-host-proc-sys-net\") pod \"cilium-txwlg\" (UID: \"b4ddd82f-b23b-4294-ac2f-16085266df62\") " pod="kube-system/cilium-txwlg" Sep 9 22:12:07.465647 kubelet[2811]: I0909 22:12:07.465200 2811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/b4ddd82f-b23b-4294-ac2f-16085266df62-bpf-maps\") pod \"cilium-txwlg\" (UID: \"b4ddd82f-b23b-4294-ac2f-16085266df62\") " pod="kube-system/cilium-txwlg" Sep 9 22:12:07.465647 kubelet[2811]: I0909 22:12:07.465221 2811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/b4ddd82f-b23b-4294-ac2f-16085266df62-cilium-cgroup\") pod \"cilium-txwlg\" (UID: \"b4ddd82f-b23b-4294-ac2f-16085266df62\") " pod="kube-system/cilium-txwlg" Sep 9 22:12:07.465647 kubelet[2811]: I0909 22:12:07.465241 2811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/b4ddd82f-b23b-4294-ac2f-16085266df62-etc-cni-netd\") pod \"cilium-txwlg\" (UID: \"b4ddd82f-b23b-4294-ac2f-16085266df62\") " pod="kube-system/cilium-txwlg" Sep 9 22:12:07.465647 kubelet[2811]: I0909 22:12:07.465260 2811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b07afa22-e88b-41a8-8276-f7add37c8f5b-lib-modules\") pod \"kube-proxy-x4b22\" (UID: \"b07afa22-e88b-41a8-8276-f7add37c8f5b\") " pod="kube-system/kube-proxy-x4b22" Sep 9 22:12:07.466073 kubelet[2811]: I0909 22:12:07.465280 2811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/b4ddd82f-b23b-4294-ac2f-16085266df62-hostproc\") pod \"cilium-txwlg\" (UID: \"b4ddd82f-b23b-4294-ac2f-16085266df62\") " pod="kube-system/cilium-txwlg" Sep 9 22:12:07.466073 kubelet[2811]: I0909 22:12:07.465314 2811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/b4ddd82f-b23b-4294-ac2f-16085266df62-host-proc-sys-kernel\") pod \"cilium-txwlg\" (UID: \"b4ddd82f-b23b-4294-ac2f-16085266df62\") " pod="kube-system/cilium-txwlg" Sep 9 22:12:07.466073 kubelet[2811]: I0909 22:12:07.465335 2811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/b07afa22-e88b-41a8-8276-f7add37c8f5b-kube-proxy\") pod \"kube-proxy-x4b22\" (UID: \"b07afa22-e88b-41a8-8276-f7add37c8f5b\") " pod="kube-system/kube-proxy-x4b22" Sep 9 22:12:07.466073 kubelet[2811]: I0909 22:12:07.465358 2811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/b07afa22-e88b-41a8-8276-f7add37c8f5b-xtables-lock\") pod \"kube-proxy-x4b22\" (UID: \"b07afa22-e88b-41a8-8276-f7add37c8f5b\") " pod="kube-system/kube-proxy-x4b22" Sep 9 22:12:07.566121 kubelet[2811]: I0909 22:12:07.566030 2811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-fnn9x\" (UniqueName: \"kubernetes.io/projected/3d4ca5c4-f14a-4a9d-9f31-92bc61c3ff7c-kube-api-access-fnn9x\") pod \"cilium-operator-5d85765b45-dm46w\" (UID: \"3d4ca5c4-f14a-4a9d-9f31-92bc61c3ff7c\") " pod="kube-system/cilium-operator-5d85765b45-dm46w" Sep 9 22:12:07.566386 kubelet[2811]: I0909 22:12:07.566351 2811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/3d4ca5c4-f14a-4a9d-9f31-92bc61c3ff7c-cilium-config-path\") pod \"cilium-operator-5d85765b45-dm46w\" (UID: \"3d4ca5c4-f14a-4a9d-9f31-92bc61c3ff7c\") " pod="kube-system/cilium-operator-5d85765b45-dm46w" Sep 9 22:12:07.726307 kubelet[2811]: E0909 22:12:07.725576 2811 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 22:12:07.726540 containerd[1584]: time="2025-09-09T22:12:07.726482050Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-x4b22,Uid:b07afa22-e88b-41a8-8276-f7add37c8f5b,Namespace:kube-system,Attempt:0,}" Sep 9 22:12:07.739327 kubelet[2811]: E0909 22:12:07.739264 2811 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 22:12:07.740026 containerd[1584]: time="2025-09-09T22:12:07.739961780Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-txwlg,Uid:b4ddd82f-b23b-4294-ac2f-16085266df62,Namespace:kube-system,Attempt:0,}" Sep 9 22:12:07.766961 kubelet[2811]: E0909 22:12:07.766920 2811 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 22:12:07.767499 containerd[1584]: time="2025-09-09T22:12:07.767454175Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5d85765b45-dm46w,Uid:3d4ca5c4-f14a-4a9d-9f31-92bc61c3ff7c,Namespace:kube-system,Attempt:0,}" Sep 9 22:12:07.925731 kubelet[2811]: E0909 22:12:07.924363 2811 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 22:12:10.644486 kubelet[2811]: E0909 22:12:10.644233 2811 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 22:12:10.657017 containerd[1584]: time="2025-09-09T22:12:10.656694999Z" level=info msg="connecting to shim 719690c56496275e545e6f3b56e2d58bb76a4922eaf867929ffeed26a5e73e16" address="unix:///run/containerd/s/1b8f0b7c915719f0d7ea2970f473632e702948348a22b3cd96d49e143d91db2a" namespace=k8s.io protocol=ttrpc version=3 Sep 9 22:12:10.679157 containerd[1584]: time="2025-09-09T22:12:10.679067188Z" level=info msg="connecting to shim 9efcfb911708b17146bbb5f84f46a861f7ea072204a4bfe5bdeebaad0ed6c788" address="unix:///run/containerd/s/5d9b23f21ad392381ef47f7d358f60101ee1707fb0d5cedd1d5845d60cc89e33" namespace=k8s.io protocol=ttrpc version=3 Sep 9 22:12:10.695385 containerd[1584]: time="2025-09-09T22:12:10.695307370Z" level=info msg="connecting to shim 34d0fca3a76acda13e23eb4de4513535648a20deb0c35114f3a2563ef2af7605" address="unix:///run/containerd/s/6a408ea5545aa76b521be4e0fca6f920c42e1b3b1c94409757e7b81e296e1ce5" namespace=k8s.io protocol=ttrpc version=3 Sep 9 
22:12:10.724079 systemd[1]: Started cri-containerd-719690c56496275e545e6f3b56e2d58bb76a4922eaf867929ffeed26a5e73e16.scope - libcontainer container 719690c56496275e545e6f3b56e2d58bb76a4922eaf867929ffeed26a5e73e16. Sep 9 22:12:10.732282 systemd[1]: Started cri-containerd-34d0fca3a76acda13e23eb4de4513535648a20deb0c35114f3a2563ef2af7605.scope - libcontainer container 34d0fca3a76acda13e23eb4de4513535648a20deb0c35114f3a2563ef2af7605. Sep 9 22:12:10.735084 systemd[1]: Started cri-containerd-9efcfb911708b17146bbb5f84f46a861f7ea072204a4bfe5bdeebaad0ed6c788.scope - libcontainer container 9efcfb911708b17146bbb5f84f46a861f7ea072204a4bfe5bdeebaad0ed6c788. Sep 9 22:12:10.784748 containerd[1584]: time="2025-09-09T22:12:10.784271894Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-x4b22,Uid:b07afa22-e88b-41a8-8276-f7add37c8f5b,Namespace:kube-system,Attempt:0,} returns sandbox id \"719690c56496275e545e6f3b56e2d58bb76a4922eaf867929ffeed26a5e73e16\"" Sep 9 22:12:10.785988 kubelet[2811]: E0909 22:12:10.785957 2811 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 22:12:10.791489 containerd[1584]: time="2025-09-09T22:12:10.791394712Z" level=info msg="CreateContainer within sandbox \"719690c56496275e545e6f3b56e2d58bb76a4922eaf867929ffeed26a5e73e16\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Sep 9 22:12:10.802823 containerd[1584]: time="2025-09-09T22:12:10.802649420Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-txwlg,Uid:b4ddd82f-b23b-4294-ac2f-16085266df62,Namespace:kube-system,Attempt:0,} returns sandbox id \"9efcfb911708b17146bbb5f84f46a861f7ea072204a4bfe5bdeebaad0ed6c788\"" Sep 9 22:12:10.805691 kubelet[2811]: E0909 22:12:10.805567 2811 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 22:12:10.814347 containerd[1584]: time="2025-09-09T22:12:10.814259802Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Sep 9 22:12:10.833059 containerd[1584]: time="2025-09-09T22:12:10.832977693Z" level=info msg="Container 844407372477edf53f9e76735b720bb63c2476819d6d38bb9c94025b53b37aec: CDI devices from CRI Config.CDIDevices: []" Sep 9 22:12:10.834381 containerd[1584]: time="2025-09-09T22:12:10.834326243Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5d85765b45-dm46w,Uid:3d4ca5c4-f14a-4a9d-9f31-92bc61c3ff7c,Namespace:kube-system,Attempt:0,} returns sandbox id \"34d0fca3a76acda13e23eb4de4513535648a20deb0c35114f3a2563ef2af7605\"" Sep 9 22:12:10.835552 kubelet[2811]: E0909 22:12:10.835505 2811 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 22:12:10.846449 containerd[1584]: time="2025-09-09T22:12:10.846376216Z" level=info msg="CreateContainer within sandbox \"719690c56496275e545e6f3b56e2d58bb76a4922eaf867929ffeed26a5e73e16\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"844407372477edf53f9e76735b720bb63c2476819d6d38bb9c94025b53b37aec\"" Sep 9 22:12:10.847451 containerd[1584]: time="2025-09-09T22:12:10.847412472Z" level=info msg="StartContainer for \"844407372477edf53f9e76735b720bb63c2476819d6d38bb9c94025b53b37aec\"" Sep 9 22:12:10.849868 
containerd[1584]: time="2025-09-09T22:12:10.849819780Z" level=info msg="connecting to shim 844407372477edf53f9e76735b720bb63c2476819d6d38bb9c94025b53b37aec" address="unix:///run/containerd/s/1b8f0b7c915719f0d7ea2970f473632e702948348a22b3cd96d49e143d91db2a" protocol=ttrpc version=3 Sep 9 22:12:10.880021 systemd[1]: Started cri-containerd-844407372477edf53f9e76735b720bb63c2476819d6d38bb9c94025b53b37aec.scope - libcontainer container 844407372477edf53f9e76735b720bb63c2476819d6d38bb9c94025b53b37aec. Sep 9 22:12:10.939794 kubelet[2811]: E0909 22:12:10.939578 2811 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 22:12:10.960717 containerd[1584]: time="2025-09-09T22:12:10.960590867Z" level=info msg="StartContainer for \"844407372477edf53f9e76735b720bb63c2476819d6d38bb9c94025b53b37aec\" returns successfully" Sep 9 22:12:11.248694 kubelet[2811]: E0909 22:12:11.248434 2811 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 22:12:11.403421 kubelet[2811]: E0909 22:12:11.402347 2811 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 22:12:11.944411 kubelet[2811]: E0909 22:12:11.943981 2811 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 22:12:11.944411 kubelet[2811]: E0909 22:12:11.944007 2811 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 22:12:11.944411 kubelet[2811]: E0909 22:12:11.944007 2811 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 22:12:12.695439 kubelet[2811]: I0909 22:12:12.695361 2811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-x4b22" podStartSLOduration=5.695333652 podStartE2EDuration="5.695333652s" podCreationTimestamp="2025-09-09 22:12:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-09 22:12:12.694381649 +0000 UTC m=+9.124907082" watchObservedRunningTime="2025-09-09 22:12:12.695333652 +0000 UTC m=+9.125859075" Sep 9 22:12:12.944946 kubelet[2811]: E0909 22:12:12.944912 2811 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 22:12:26.757657 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3311210313.mount: Deactivated successfully. 
Sep 9 22:12:33.179218 containerd[1584]: time="2025-09-09T22:12:33.179105466Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 22:12:33.179971 containerd[1584]: time="2025-09-09T22:12:33.179865178Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=166730503" Sep 9 22:12:33.183781 containerd[1584]: time="2025-09-09T22:12:33.183688778Z" level=info msg="ImageCreate event name:\"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 22:12:33.185472 containerd[1584]: time="2025-09-09T22:12:33.185400070Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"166719855\" in 22.371034209s" Sep 9 22:12:33.185557 containerd[1584]: time="2025-09-09T22:12:33.185470794Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\"" Sep 9 22:12:33.193767 containerd[1584]: time="2025-09-09T22:12:33.193714765Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Sep 9 22:12:33.203715 containerd[1584]: time="2025-09-09T22:12:33.203647004Z" level=info msg="CreateContainer within sandbox \"9efcfb911708b17146bbb5f84f46a861f7ea072204a4bfe5bdeebaad0ed6c788\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Sep 9 22:12:33.216742 containerd[1584]: time="2025-09-09T22:12:33.216662617Z" level=info msg="Container a95f62b1fc09b9f11281220048a4f56429dd14f6ff1bdba42395b709f4bc590d: CDI devices from CRI Config.CDIDevices: []" Sep 9 22:12:33.230324 containerd[1584]: time="2025-09-09T22:12:33.230250085Z" level=info msg="CreateContainer within sandbox \"9efcfb911708b17146bbb5f84f46a861f7ea072204a4bfe5bdeebaad0ed6c788\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"a95f62b1fc09b9f11281220048a4f56429dd14f6ff1bdba42395b709f4bc590d\"" Sep 9 22:12:33.231146 containerd[1584]: time="2025-09-09T22:12:33.231110018Z" level=info msg="StartContainer for \"a95f62b1fc09b9f11281220048a4f56429dd14f6ff1bdba42395b709f4bc590d\"" Sep 9 22:12:33.232194 containerd[1584]: time="2025-09-09T22:12:33.232165877Z" level=info msg="connecting to shim a95f62b1fc09b9f11281220048a4f56429dd14f6ff1bdba42395b709f4bc590d" address="unix:///run/containerd/s/5d9b23f21ad392381ef47f7d358f60101ee1707fb0d5cedd1d5845d60cc89e33" protocol=ttrpc version=3 Sep 9 22:12:33.293975 systemd[1]: Started cri-containerd-a95f62b1fc09b9f11281220048a4f56429dd14f6ff1bdba42395b709f4bc590d.scope - libcontainer container a95f62b1fc09b9f11281220048a4f56429dd14f6ff1bdba42395b709f4bc590d. 
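[editor's note] The two containerd messages bracketing the cilium image pull let the reported duration be sanity-checked: PullImage was logged at about 22:12:10.814 (earlier in this section) and the Pulled message above, at about 22:12:33.185, says the pull took 22.371034209s. A quick cross-check using those two timestamps truncated to milliseconds; agreement is only expected to within a few milliseconds, since the internal measurement does not start and stop exactly at the log lines:

```python
#!/usr/bin/env python3
"""Cross-check containerd's reported pull time for the cilium image.

The two timestamps are the PullImage and Pulled message times from this log,
truncated to milliseconds; containerd itself reported "in 22.371034209s".
"""
from datetime import datetime

FMT = "%Y-%m-%dT%H:%M:%S.%f%z"

pull_started  = datetime.strptime("2025-09-09T22:12:10.814+00:00", FMT)
pull_finished = datetime.strptime("2025-09-09T22:12:33.185+00:00", FMT)
reported = 22.371034209  # seconds, from the "Pulled image ... in ..." message

elapsed = (pull_finished - pull_started).total_seconds()
print(f"log-to-log elapsed: {elapsed:.3f}s, containerd reported: {reported:.3f}s")
assert abs(elapsed - reported) < 0.01, "timestamps and reported duration should agree to ~10ms"
```

The roughly 22.4 s pull also explains the quiet gap between the sandboxes coming up at 22:12:10 and the first cilium init container only being created at 22:12:33.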
Sep 9 22:12:33.337466 containerd[1584]: time="2025-09-09T22:12:33.337414403Z" level=info msg="StartContainer for \"a95f62b1fc09b9f11281220048a4f56429dd14f6ff1bdba42395b709f4bc590d\" returns successfully" Sep 9 22:12:33.348199 systemd[1]: cri-containerd-a95f62b1fc09b9f11281220048a4f56429dd14f6ff1bdba42395b709f4bc590d.scope: Deactivated successfully. Sep 9 22:12:33.350382 containerd[1584]: time="2025-09-09T22:12:33.350338741Z" level=info msg="TaskExit event in podsandbox handler container_id:\"a95f62b1fc09b9f11281220048a4f56429dd14f6ff1bdba42395b709f4bc590d\" id:\"a95f62b1fc09b9f11281220048a4f56429dd14f6ff1bdba42395b709f4bc590d\" pid:3208 exited_at:{seconds:1757455953 nanos:349686805}" Sep 9 22:12:33.350497 containerd[1584]: time="2025-09-09T22:12:33.350436388Z" level=info msg="received exit event container_id:\"a95f62b1fc09b9f11281220048a4f56429dd14f6ff1bdba42395b709f4bc590d\" id:\"a95f62b1fc09b9f11281220048a4f56429dd14f6ff1bdba42395b709f4bc590d\" pid:3208 exited_at:{seconds:1757455953 nanos:349686805}" Sep 9 22:12:33.378282 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a95f62b1fc09b9f11281220048a4f56429dd14f6ff1bdba42395b709f4bc590d-rootfs.mount: Deactivated successfully. Sep 9 22:12:34.138863 kubelet[2811]: E0909 22:12:34.138789 2811 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 22:12:35.142726 kubelet[2811]: E0909 22:12:35.142663 2811 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 22:12:35.145418 containerd[1584]: time="2025-09-09T22:12:35.145366447Z" level=info msg="CreateContainer within sandbox \"9efcfb911708b17146bbb5f84f46a861f7ea072204a4bfe5bdeebaad0ed6c788\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Sep 9 22:12:35.298035 containerd[1584]: time="2025-09-09T22:12:35.297927324Z" level=info msg="Container 438058564076645e3408feb3bfc3fe2f63a4006a922be77756002c2f38e665c2: CDI devices from CRI Config.CDIDevices: []" Sep 9 22:12:35.301835 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount333090392.mount: Deactivated successfully. Sep 9 22:12:35.418343 containerd[1584]: time="2025-09-09T22:12:35.418191635Z" level=info msg="CreateContainer within sandbox \"9efcfb911708b17146bbb5f84f46a861f7ea072204a4bfe5bdeebaad0ed6c788\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"438058564076645e3408feb3bfc3fe2f63a4006a922be77756002c2f38e665c2\"" Sep 9 22:12:35.419736 containerd[1584]: time="2025-09-09T22:12:35.418886496Z" level=info msg="StartContainer for \"438058564076645e3408feb3bfc3fe2f63a4006a922be77756002c2f38e665c2\"" Sep 9 22:12:35.420254 containerd[1584]: time="2025-09-09T22:12:35.420106390Z" level=info msg="connecting to shim 438058564076645e3408feb3bfc3fe2f63a4006a922be77756002c2f38e665c2" address="unix:///run/containerd/s/5d9b23f21ad392381ef47f7d358f60101ee1707fb0d5cedd1d5845d60cc89e33" protocol=ttrpc version=3 Sep 9 22:12:35.445925 systemd[1]: Started cri-containerd-438058564076645e3408feb3bfc3fe2f63a4006a922be77756002c2f38e665c2.scope - libcontainer container 438058564076645e3408feb3bfc3fe2f63a4006a922be77756002c2f38e665c2. 
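[editor's note] The TaskExit event above records the container's exit time as a raw Unix timestamp (exited_at:{seconds:1757455953 nanos:349686805}) while the journal prefixes use wall-clock UTC. Converting one into the other shows they describe the same instant:

```python
#!/usr/bin/env python3
"""Convert the exited_at epoch from the TaskExit event to wall-clock UTC."""
from datetime import datetime, timezone

seconds, nanos = 1757455953, 349686805  # exited_at from the a95f62b1... TaskExit event
when = datetime.fromtimestamp(seconds + nanos / 1e9, tz=timezone.utc)
print(when.isoformat(timespec="milliseconds"))
# -> 2025-09-09T22:12:33.349+00:00, a fraction of a millisecond before the
#    22:12:33.350338 journal line that carries the event
```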
Sep 9 22:12:35.483734 containerd[1584]: time="2025-09-09T22:12:35.482875590Z" level=info msg="StartContainer for \"438058564076645e3408feb3bfc3fe2f63a4006a922be77756002c2f38e665c2\" returns successfully" Sep 9 22:12:35.501344 systemd[1]: systemd-sysctl.service: Deactivated successfully. Sep 9 22:12:35.501627 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Sep 9 22:12:35.502344 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... Sep 9 22:12:35.504399 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Sep 9 22:12:35.507125 containerd[1584]: time="2025-09-09T22:12:35.507083210Z" level=info msg="TaskExit event in podsandbox handler container_id:\"438058564076645e3408feb3bfc3fe2f63a4006a922be77756002c2f38e665c2\" id:\"438058564076645e3408feb3bfc3fe2f63a4006a922be77756002c2f38e665c2\" pid:3254 exited_at:{seconds:1757455955 nanos:506618320}" Sep 9 22:12:35.507208 containerd[1584]: time="2025-09-09T22:12:35.507184774Z" level=info msg="received exit event container_id:\"438058564076645e3408feb3bfc3fe2f63a4006a922be77756002c2f38e665c2\" id:\"438058564076645e3408feb3bfc3fe2f63a4006a922be77756002c2f38e665c2\" pid:3254 exited_at:{seconds:1757455955 nanos:506618320}" Sep 9 22:12:35.507447 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Sep 9 22:12:35.508998 systemd[1]: cri-containerd-438058564076645e3408feb3bfc3fe2f63a4006a922be77756002c2f38e665c2.scope: Deactivated successfully. Sep 9 22:12:35.550561 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Sep 9 22:12:36.113932 containerd[1584]: time="2025-09-09T22:12:36.113756809Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 22:12:36.114746 containerd[1584]: time="2025-09-09T22:12:36.114667012Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=18904197" Sep 9 22:12:36.116185 containerd[1584]: time="2025-09-09T22:12:36.116137528Z" level=info msg="ImageCreate event name:\"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 22:12:36.117812 containerd[1584]: time="2025-09-09T22:12:36.117761850Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"18897442\" in 2.924001998s" Sep 9 22:12:36.117812 containerd[1584]: time="2025-09-09T22:12:36.117803249Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\"" Sep 9 22:12:36.122288 containerd[1584]: time="2025-09-09T22:12:36.122224618Z" level=info msg="CreateContainer within sandbox \"34d0fca3a76acda13e23eb4de4513535648a20deb0c35114f3a2563ef2af7605\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Sep 9 22:12:36.134600 containerd[1584]: time="2025-09-09T22:12:36.134517234Z" level=info msg="Container 
7cc16a80c5d358bd91c85316de5f04ffb55863f6b7d0e6ed3a1e5b5660df800a: CDI devices from CRI Config.CDIDevices: []" Sep 9 22:12:36.146015 containerd[1584]: time="2025-09-09T22:12:36.145926238Z" level=info msg="CreateContainer within sandbox \"34d0fca3a76acda13e23eb4de4513535648a20deb0c35114f3a2563ef2af7605\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"7cc16a80c5d358bd91c85316de5f04ffb55863f6b7d0e6ed3a1e5b5660df800a\"" Sep 9 22:12:36.148245 containerd[1584]: time="2025-09-09T22:12:36.148188742Z" level=info msg="StartContainer for \"7cc16a80c5d358bd91c85316de5f04ffb55863f6b7d0e6ed3a1e5b5660df800a\"" Sep 9 22:12:36.149435 containerd[1584]: time="2025-09-09T22:12:36.149379303Z" level=info msg="connecting to shim 7cc16a80c5d358bd91c85316de5f04ffb55863f6b7d0e6ed3a1e5b5660df800a" address="unix:///run/containerd/s/6a408ea5545aa76b521be4e0fca6f920c42e1b3b1c94409757e7b81e296e1ce5" protocol=ttrpc version=3 Sep 9 22:12:36.149822 kubelet[2811]: E0909 22:12:36.149753 2811 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 22:12:36.154977 containerd[1584]: time="2025-09-09T22:12:36.154920838Z" level=info msg="CreateContainer within sandbox \"9efcfb911708b17146bbb5f84f46a861f7ea072204a4bfe5bdeebaad0ed6c788\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Sep 9 22:12:36.184429 containerd[1584]: time="2025-09-09T22:12:36.184362031Z" level=info msg="Container e38eac7fe417eac831270f7da53877ed15237ac46286278c209c3f39482ce68d: CDI devices from CRI Config.CDIDevices: []" Sep 9 22:12:36.187304 systemd[1]: Started cri-containerd-7cc16a80c5d358bd91c85316de5f04ffb55863f6b7d0e6ed3a1e5b5660df800a.scope - libcontainer container 7cc16a80c5d358bd91c85316de5f04ffb55863f6b7d0e6ed3a1e5b5660df800a. Sep 9 22:12:36.204560 containerd[1584]: time="2025-09-09T22:12:36.204392781Z" level=info msg="CreateContainer within sandbox \"9efcfb911708b17146bbb5f84f46a861f7ea072204a4bfe5bdeebaad0ed6c788\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"e38eac7fe417eac831270f7da53877ed15237ac46286278c209c3f39482ce68d\"" Sep 9 22:12:36.204900 containerd[1584]: time="2025-09-09T22:12:36.204850267Z" level=info msg="StartContainer for \"e38eac7fe417eac831270f7da53877ed15237ac46286278c209c3f39482ce68d\"" Sep 9 22:12:36.209064 containerd[1584]: time="2025-09-09T22:12:36.209001428Z" level=info msg="connecting to shim e38eac7fe417eac831270f7da53877ed15237ac46286278c209c3f39482ce68d" address="unix:///run/containerd/s/5d9b23f21ad392381ef47f7d358f60101ee1707fb0d5cedd1d5845d60cc89e33" protocol=ttrpc version=3 Sep 9 22:12:36.263767 containerd[1584]: time="2025-09-09T22:12:36.263684081Z" level=info msg="StartContainer for \"7cc16a80c5d358bd91c85316de5f04ffb55863f6b7d0e6ed3a1e5b5660df800a\" returns successfully" Sep 9 22:12:36.271627 systemd[1]: Started cri-containerd-e38eac7fe417eac831270f7da53877ed15237ac46286278c209c3f39482ce68d.scope - libcontainer container e38eac7fe417eac831270f7da53877ed15237ac46286278c209c3f39482ce68d. Sep 9 22:12:36.303531 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-438058564076645e3408feb3bfc3fe2f63a4006a922be77756002c2f38e665c2-rootfs.mount: Deactivated successfully. Sep 9 22:12:36.412808 systemd[1]: cri-containerd-e38eac7fe417eac831270f7da53877ed15237ac46286278c209c3f39482ce68d.scope: Deactivated successfully. 
Sep 9 22:12:36.414631 containerd[1584]: time="2025-09-09T22:12:36.413119653Z" level=info msg="TaskExit event in podsandbox handler container_id:\"e38eac7fe417eac831270f7da53877ed15237ac46286278c209c3f39482ce68d\" id:\"e38eac7fe417eac831270f7da53877ed15237ac46286278c209c3f39482ce68d\" pid:3347 exited_at:{seconds:1757455956 nanos:412456513}" Sep 9 22:12:36.420843 containerd[1584]: time="2025-09-09T22:12:36.420775298Z" level=info msg="received exit event container_id:\"e38eac7fe417eac831270f7da53877ed15237ac46286278c209c3f39482ce68d\" id:\"e38eac7fe417eac831270f7da53877ed15237ac46286278c209c3f39482ce68d\" pid:3347 exited_at:{seconds:1757455956 nanos:412456513}" Sep 9 22:12:36.470039 containerd[1584]: time="2025-09-09T22:12:36.469970470Z" level=info msg="StartContainer for \"e38eac7fe417eac831270f7da53877ed15237ac46286278c209c3f39482ce68d\" returns successfully" Sep 9 22:12:36.534940 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e38eac7fe417eac831270f7da53877ed15237ac46286278c209c3f39482ce68d-rootfs.mount: Deactivated successfully. Sep 9 22:12:37.152808 kubelet[2811]: E0909 22:12:37.152769 2811 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 22:12:37.157722 kubelet[2811]: E0909 22:12:37.157663 2811 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 22:12:38.159693 kubelet[2811]: E0909 22:12:38.159649 2811 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 22:12:38.160169 kubelet[2811]: E0909 22:12:38.159874 2811 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 22:12:39.383214 kubelet[2811]: E0909 22:12:39.383169 2811 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 22:12:39.384665 containerd[1584]: time="2025-09-09T22:12:39.384620404Z" level=info msg="CreateContainer within sandbox \"9efcfb911708b17146bbb5f84f46a861f7ea072204a4bfe5bdeebaad0ed6c788\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Sep 9 22:12:39.904111 containerd[1584]: time="2025-09-09T22:12:39.904025971Z" level=info msg="Container 765a19e114d2722452abd71d0c6a6232c6d0b21c714a6f78be214a7f0f454348: CDI devices from CRI Config.CDIDevices: []" Sep 9 22:12:39.908346 kubelet[2811]: I0909 22:12:39.907218 2811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-5d85765b45-dm46w" podStartSLOduration=7.623586235 podStartE2EDuration="32.907198729s" podCreationTimestamp="2025-09-09 22:12:07 +0000 UTC" firstStartedPulling="2025-09-09 22:12:10.836198341 +0000 UTC m=+7.266723734" lastFinishedPulling="2025-09-09 22:12:36.119810835 +0000 UTC m=+32.550336228" observedRunningTime="2025-09-09 22:12:39.906807138 +0000 UTC m=+36.337332541" watchObservedRunningTime="2025-09-09 22:12:39.907198729 +0000 UTC m=+36.337724132" Sep 9 22:12:39.910521 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2589043650.mount: Deactivated successfully. 
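[editor's note] The pod_startup_latency_tracker record above for cilium-operator-5d85765b45-dm46w carries every timestamp needed to reproduce its two figures: podStartE2EDuration is watchObservedRunningTime minus podCreationTimestamp, and podStartSLOduration comes out as that same span minus the image-pull window (lastFinishedPulling minus firstStartedPulling). The sketch below redoes the arithmetic in integer nanoseconds from the logged values; it is a consistency check on this one record, not a restatement of kubelet's exact definition:

```python
#!/usr/bin/env python3
"""Reproduce kubelet's startup-latency figures for cilium-operator-5d85765b45-dm46w
from the timestamps logged in the same pod_startup_latency_tracker record.
All arithmetic is done in integer nanoseconds to avoid float rounding."""
from datetime import datetime, timezone

def ns(s):
    """'2025-09-09 22:12:10.836198341 +0000 UTC' -> nanoseconds since the Unix epoch."""
    date, clock = s.split()[0], s.split()[1]
    hms, _, frac = clock.partition(".")
    base = datetime.strptime(f"{date} {hms}", "%Y-%m-%d %H:%M:%S").replace(tzinfo=timezone.utc)
    return int(base.timestamp()) * 10**9 + int((frac or "0").ljust(9, "0"))

created    = ns("2025-09-09 22:12:07 +0000 UTC")            # podCreationTimestamp
first_pull = ns("2025-09-09 22:12:10.836198341 +0000 UTC")  # firstStartedPulling
last_pull  = ns("2025-09-09 22:12:36.119810835 +0000 UTC")  # lastFinishedPulling
observed   = ns("2025-09-09 22:12:39.907198729 +0000 UTC")  # watchObservedRunningTime

e2e = observed - created                # 32.907198729s in the log
slo = e2e - (last_pull - first_pull)    # 7.623586235 in the log
print(f"E2E: {e2e/1e9:.9f}s   SLO: {slo/1e9:.9f}s")
```

Both printed values match the logged figures; the 25.28 s difference between them is the pull window the tracker excludes from the SLO number.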
Sep 9 22:12:39.945652 containerd[1584]: time="2025-09-09T22:12:39.945587431Z" level=info msg="CreateContainer within sandbox \"9efcfb911708b17146bbb5f84f46a861f7ea072204a4bfe5bdeebaad0ed6c788\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"765a19e114d2722452abd71d0c6a6232c6d0b21c714a6f78be214a7f0f454348\"" Sep 9 22:12:39.946296 containerd[1584]: time="2025-09-09T22:12:39.946245412Z" level=info msg="StartContainer for \"765a19e114d2722452abd71d0c6a6232c6d0b21c714a6f78be214a7f0f454348\"" Sep 9 22:12:39.947359 containerd[1584]: time="2025-09-09T22:12:39.947328591Z" level=info msg="connecting to shim 765a19e114d2722452abd71d0c6a6232c6d0b21c714a6f78be214a7f0f454348" address="unix:///run/containerd/s/5d9b23f21ad392381ef47f7d358f60101ee1707fb0d5cedd1d5845d60cc89e33" protocol=ttrpc version=3 Sep 9 22:12:39.975019 systemd[1]: Started cri-containerd-765a19e114d2722452abd71d0c6a6232c6d0b21c714a6f78be214a7f0f454348.scope - libcontainer container 765a19e114d2722452abd71d0c6a6232c6d0b21c714a6f78be214a7f0f454348. Sep 9 22:12:40.038742 systemd[1]: cri-containerd-765a19e114d2722452abd71d0c6a6232c6d0b21c714a6f78be214a7f0f454348.scope: Deactivated successfully. Sep 9 22:12:40.043164 containerd[1584]: time="2025-09-09T22:12:40.042026152Z" level=info msg="TaskExit event in podsandbox handler container_id:\"765a19e114d2722452abd71d0c6a6232c6d0b21c714a6f78be214a7f0f454348\" id:\"765a19e114d2722452abd71d0c6a6232c6d0b21c714a6f78be214a7f0f454348\" pid:3393 exited_at:{seconds:1757455960 nanos:40166872}" Sep 9 22:12:40.209193 containerd[1584]: time="2025-09-09T22:12:40.209010760Z" level=info msg="received exit event container_id:\"765a19e114d2722452abd71d0c6a6232c6d0b21c714a6f78be214a7f0f454348\" id:\"765a19e114d2722452abd71d0c6a6232c6d0b21c714a6f78be214a7f0f454348\" pid:3393 exited_at:{seconds:1757455960 nanos:40166872}" Sep 9 22:12:40.217650 containerd[1584]: time="2025-09-09T22:12:40.217606012Z" level=info msg="StartContainer for \"765a19e114d2722452abd71d0c6a6232c6d0b21c714a6f78be214a7f0f454348\" returns successfully" Sep 9 22:12:40.241060 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-765a19e114d2722452abd71d0c6a6232c6d0b21c714a6f78be214a7f0f454348-rootfs.mount: Deactivated successfully. 
Sep 9 22:12:40.398064 kubelet[2811]: E0909 22:12:40.398000 2811 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 22:12:40.406435 containerd[1584]: time="2025-09-09T22:12:40.406196731Z" level=info msg="CreateContainer within sandbox \"9efcfb911708b17146bbb5f84f46a861f7ea072204a4bfe5bdeebaad0ed6c788\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Sep 9 22:12:40.479221 containerd[1584]: time="2025-09-09T22:12:40.477355055Z" level=info msg="Container 8097de9c4cc34cfb66b4114930c72c80175fe6be680299c8c1fe1adf1d1ba029: CDI devices from CRI Config.CDIDevices: []" Sep 9 22:12:40.583116 containerd[1584]: time="2025-09-09T22:12:40.582614510Z" level=info msg="CreateContainer within sandbox \"9efcfb911708b17146bbb5f84f46a861f7ea072204a4bfe5bdeebaad0ed6c788\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"8097de9c4cc34cfb66b4114930c72c80175fe6be680299c8c1fe1adf1d1ba029\"" Sep 9 22:12:40.591242 containerd[1584]: time="2025-09-09T22:12:40.585635279Z" level=info msg="StartContainer for \"8097de9c4cc34cfb66b4114930c72c80175fe6be680299c8c1fe1adf1d1ba029\"" Sep 9 22:12:40.591242 containerd[1584]: time="2025-09-09T22:12:40.586901380Z" level=info msg="connecting to shim 8097de9c4cc34cfb66b4114930c72c80175fe6be680299c8c1fe1adf1d1ba029" address="unix:///run/containerd/s/5d9b23f21ad392381ef47f7d358f60101ee1707fb0d5cedd1d5845d60cc89e33" protocol=ttrpc version=3 Sep 9 22:12:40.677577 systemd[1]: Started cri-containerd-8097de9c4cc34cfb66b4114930c72c80175fe6be680299c8c1fe1adf1d1ba029.scope - libcontainer container 8097de9c4cc34cfb66b4114930c72c80175fe6be680299c8c1fe1adf1d1ba029. Sep 9 22:12:40.773155 containerd[1584]: time="2025-09-09T22:12:40.772980908Z" level=info msg="StartContainer for \"8097de9c4cc34cfb66b4114930c72c80175fe6be680299c8c1fe1adf1d1ba029\" returns successfully" Sep 9 22:12:40.856321 containerd[1584]: time="2025-09-09T22:12:40.856259842Z" level=info msg="TaskExit event in podsandbox handler container_id:\"8097de9c4cc34cfb66b4114930c72c80175fe6be680299c8c1fe1adf1d1ba029\" id:\"92ed10f5f0f8493a851f6000b037fc555640c73148769c80718ed048c3e47120\" pid:3463 exited_at:{seconds:1757455960 nanos:855808736}" Sep 9 22:12:40.908722 kubelet[2811]: I0909 22:12:40.908632 2811 kubelet_node_status.go:488] "Fast updating node status as it just became ready" Sep 9 22:12:40.953988 systemd[1]: Created slice kubepods-burstable-pod04314675_ee55_4258_af41_fd75d16c568e.slice - libcontainer container kubepods-burstable-pod04314675_ee55_4258_af41_fd75d16c568e.slice. Sep 9 22:12:40.960621 systemd[1]: Created slice kubepods-burstable-pod490f5bc3_c120_4db5_af75_41fa07464bf0.slice - libcontainer container kubepods-burstable-pod490f5bc3_c120_4db5_af75_41fa07464bf0.slice. 
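[editor's note] Taken in order, the CreateContainer messages for sandbox 9efcfb91... spell out the cilium pod's startup chain: mount-cgroup, apply-sysctl-overwrites, mount-bpf-fs, clean-cilium-state, and finally the long-running cilium-agent above. A throwaway parser that recovers that order from a saved copy of the journal; the regex is keyed to the &ContainerMetadata{Name:...} fragment exactly as these containerd messages print it:

```python
#!/usr/bin/env python3
"""List containers created in a given containerd sandbox, in log order.

Reads journal text on stdin and matches the 'CreateContainer within sandbox ...
for container &ContainerMetadata{Name:...}' messages seen in this log.
"""
import re
import sys

PATTERN = re.compile(
    r'CreateContainer within sandbox \\?"(?P<sandbox>[0-9a-f]+)\\?" '
    r'for container &ContainerMetadata\{Name:(?P<name>[^,]+),'
)

def containers_in(sandbox_prefix, text):
    for m in PATTERN.finditer(text):
        if m.group("sandbox").startswith(sandbox_prefix):
            yield m.group("name")

if __name__ == "__main__":
    text = sys.stdin.read()
    # Feed this section on stdin and ask for the cilium sandbox 9efcfb91...
    print(list(containers_in("9efcfb91", text)))
    # expected: ['mount-cgroup', 'apply-sysctl-overwrites', 'mount-bpf-fs',
    #            'clean-cilium-state', 'cilium-agent']
```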
Sep 9 22:12:40.992661 kubelet[2811]: I0909 22:12:40.992553 2811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/490f5bc3-c120-4db5-af75-41fa07464bf0-config-volume\") pod \"coredns-7c65d6cfc9-dgwjl\" (UID: \"490f5bc3-c120-4db5-af75-41fa07464bf0\") " pod="kube-system/coredns-7c65d6cfc9-dgwjl" Sep 9 22:12:40.992661 kubelet[2811]: I0909 22:12:40.992630 2811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8mkcc\" (UniqueName: \"kubernetes.io/projected/490f5bc3-c120-4db5-af75-41fa07464bf0-kube-api-access-8mkcc\") pod \"coredns-7c65d6cfc9-dgwjl\" (UID: \"490f5bc3-c120-4db5-af75-41fa07464bf0\") " pod="kube-system/coredns-7c65d6cfc9-dgwjl" Sep 9 22:12:40.992661 kubelet[2811]: I0909 22:12:40.992674 2811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zgbzd\" (UniqueName: \"kubernetes.io/projected/04314675-ee55-4258-af41-fd75d16c568e-kube-api-access-zgbzd\") pod \"coredns-7c65d6cfc9-9l9k8\" (UID: \"04314675-ee55-4258-af41-fd75d16c568e\") " pod="kube-system/coredns-7c65d6cfc9-9l9k8" Sep 9 22:12:40.993040 kubelet[2811]: I0909 22:12:40.992748 2811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/04314675-ee55-4258-af41-fd75d16c568e-config-volume\") pod \"coredns-7c65d6cfc9-9l9k8\" (UID: \"04314675-ee55-4258-af41-fd75d16c568e\") " pod="kube-system/coredns-7c65d6cfc9-9l9k8" Sep 9 22:12:41.259871 kubelet[2811]: E0909 22:12:41.259075 2811 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 22:12:41.263796 containerd[1584]: time="2025-09-09T22:12:41.263530894Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-9l9k8,Uid:04314675-ee55-4258-af41-fd75d16c568e,Namespace:kube-system,Attempt:0,}" Sep 9 22:12:41.272206 kubelet[2811]: E0909 22:12:41.267090 2811 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 22:12:41.276379 containerd[1584]: time="2025-09-09T22:12:41.275810482Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-dgwjl,Uid:490f5bc3-c120-4db5-af75-41fa07464bf0,Namespace:kube-system,Attempt:0,}" Sep 9 22:12:41.447268 kubelet[2811]: E0909 22:12:41.445096 2811 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 22:12:41.583477 kubelet[2811]: I0909 22:12:41.583302 2811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-txwlg" podStartSLOduration=12.20031851 podStartE2EDuration="34.583254204s" podCreationTimestamp="2025-09-09 22:12:07 +0000 UTC" firstStartedPulling="2025-09-09 22:12:10.809788929 +0000 UTC m=+7.240314332" lastFinishedPulling="2025-09-09 22:12:33.192724622 +0000 UTC m=+29.623250026" observedRunningTime="2025-09-09 22:12:41.583019193 +0000 UTC m=+38.013544596" watchObservedRunningTime="2025-09-09 22:12:41.583254204 +0000 UTC m=+38.013779607" Sep 9 22:12:42.247976 containerd[1584]: time="2025-09-09T22:12:42.247880182Z" level=info msg="TaskExit event in podsandbox handler 
container_id:\"8097de9c4cc34cfb66b4114930c72c80175fe6be680299c8c1fe1adf1d1ba029\" id:\"5f67713d7574f5cace4935fd193dabfdb9d3d1d1c765e2e8c74432b5a07b2293\" pid:3573 exit_status:1 exited_at:{seconds:1757455962 nanos:247211025}" Sep 9 22:12:42.445580 kubelet[2811]: E0909 22:12:42.445529 2811 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 22:12:43.368126 systemd-networkd[1474]: cilium_host: Link UP Sep 9 22:12:43.368357 systemd-networkd[1474]: cilium_net: Link UP Sep 9 22:12:43.368545 systemd-networkd[1474]: cilium_net: Gained carrier Sep 9 22:12:43.368741 systemd-networkd[1474]: cilium_host: Gained carrier Sep 9 22:12:43.534340 systemd-networkd[1474]: cilium_vxlan: Link UP Sep 9 22:12:43.534356 systemd-networkd[1474]: cilium_vxlan: Gained carrier Sep 9 22:12:43.645159 systemd-networkd[1474]: cilium_host: Gained IPv6LL Sep 9 22:12:43.915756 kernel: NET: Registered PF_ALG protocol family Sep 9 22:12:43.932688 systemd-networkd[1474]: cilium_net: Gained IPv6LL Sep 9 22:12:44.467447 containerd[1584]: time="2025-09-09T22:12:44.467083195Z" level=info msg="TaskExit event in podsandbox handler container_id:\"8097de9c4cc34cfb66b4114930c72c80175fe6be680299c8c1fe1adf1d1ba029\" id:\"6830d633b468b47e2e29e954e509a2bee298657b297e991af277af867738b53d\" pid:3721 exit_status:1 exited_at:{seconds:1757455964 nanos:466453383}" Sep 9 22:12:44.629970 systemd-networkd[1474]: cilium_vxlan: Gained IPv6LL Sep 9 22:12:45.486609 kubelet[2811]: E0909 22:12:45.486170 2811 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 22:12:45.603936 systemd-networkd[1474]: lxc_health: Link UP Sep 9 22:12:45.605532 systemd-networkd[1474]: lxc_health: Gained carrier Sep 9 22:12:45.978044 systemd-networkd[1474]: lxc4b596135b7d5: Link UP Sep 9 22:12:46.032400 kernel: eth0: renamed from tmp81442 Sep 9 22:12:46.035055 systemd-networkd[1474]: lxc4b596135b7d5: Gained carrier Sep 9 22:12:46.062892 kernel: eth0: renamed from tmp536fd Sep 9 22:12:46.062990 systemd-networkd[1474]: lxc01d80ba8b0d6: Link UP Sep 9 22:12:46.064358 systemd-networkd[1474]: lxc01d80ba8b0d6: Gained carrier Sep 9 22:12:46.475561 kubelet[2811]: E0909 22:12:46.475494 2811 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 22:12:46.971853 containerd[1584]: time="2025-09-09T22:12:46.971752339Z" level=info msg="TaskExit event in podsandbox handler container_id:\"8097de9c4cc34cfb66b4114930c72c80175fe6be680299c8c1fe1adf1d1ba029\" id:\"7040c6bb268ae46f11186ed81124285af085f10a9b40bcb05b8a3cc1b18eaf4c\" pid:3990 exited_at:{seconds:1757455966 nanos:970020755}" Sep 9 22:12:46.990126 kubelet[2811]: E0909 22:12:46.989221 2811 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 127.0.0.1:51860->127.0.0.1:45855: write tcp 127.0.0.1:51860->127.0.0.1:45855: write: broken pipe Sep 9 22:12:47.126636 systemd-networkd[1474]: lxc_health: Gained IPv6LL Sep 9 22:12:47.190818 systemd-networkd[1474]: lxc4b596135b7d5: Gained IPv6LL Sep 9 22:12:47.497102 kubelet[2811]: E0909 22:12:47.497039 2811 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 22:12:47.640433 
systemd-networkd[1474]: lxc01d80ba8b0d6: Gained IPv6LL Sep 9 22:12:49.235353 containerd[1584]: time="2025-09-09T22:12:49.235277092Z" level=info msg="TaskExit event in podsandbox handler container_id:\"8097de9c4cc34cfb66b4114930c72c80175fe6be680299c8c1fe1adf1d1ba029\" id:\"1a7437975dd76393096680b1a91196b11c3df26564a28aedd4de0f9821df638b\" pid:4018 exited_at:{seconds:1757455969 nanos:232962599}" Sep 9 22:12:51.437913 containerd[1584]: time="2025-09-09T22:12:51.437840854Z" level=info msg="TaskExit event in podsandbox handler container_id:\"8097de9c4cc34cfb66b4114930c72c80175fe6be680299c8c1fe1adf1d1ba029\" id:\"cc74fe222cb121715e4da64942924919d5b46249eb6756d2abe1e4a25d1431fb\" pid:4046 exited_at:{seconds:1757455971 nanos:437412518}" Sep 9 22:12:52.073194 containerd[1584]: time="2025-09-09T22:12:52.072285426Z" level=info msg="connecting to shim 81442eb64f2b7a5a67e16aab5cd666e984c9a5f2d802e7c4f343070a585e3c4c" address="unix:///run/containerd/s/9d69a0b01b28b60f0f66c1bddb39d34f3a07ff09f9a93824ba86e6af122313dc" namespace=k8s.io protocol=ttrpc version=3 Sep 9 22:12:52.074781 containerd[1584]: time="2025-09-09T22:12:52.074654779Z" level=info msg="connecting to shim 536fd630996a37b8dab1bfbcafbde2338f86ba36e6284a78901a78d22a996eac" address="unix:///run/containerd/s/3e43857955d64c3e62336fafa0b1cc9c914c38b9748a9f2b3f12093283d1b4a3" namespace=k8s.io protocol=ttrpc version=3 Sep 9 22:12:52.107999 systemd[1]: Started cri-containerd-81442eb64f2b7a5a67e16aab5cd666e984c9a5f2d802e7c4f343070a585e3c4c.scope - libcontainer container 81442eb64f2b7a5a67e16aab5cd666e984c9a5f2d802e7c4f343070a585e3c4c. Sep 9 22:12:52.125106 systemd[1]: Started cri-containerd-536fd630996a37b8dab1bfbcafbde2338f86ba36e6284a78901a78d22a996eac.scope - libcontainer container 536fd630996a37b8dab1bfbcafbde2338f86ba36e6284a78901a78d22a996eac. 
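[editor's note] Each "connecting to shim <id>" address="unix:///run/containerd/s/<socket>" message ties a sandbox or container ID to the shim socket serving it; read together they show, for instance, that the kube-proxy container 84440737... reuses its sandbox 719690c5...'s socket, while the two coredns sandboxes above each get a fresh one. A small parser for that mapping, matched to the message format as printed here:

```python
#!/usr/bin/env python3
"""Group containerd 'connecting to shim' messages by shim socket.

Matches the format seen in this journal:
  msg="connecting to shim <64-hex id>" address="unix:///run/containerd/s/<hash>" ...
"""
import re
import sys
from collections import defaultdict

SHIM = re.compile(r'connecting to shim (?P<id>[0-9a-f]{64})" address="(?P<addr>unix://[^"]+)"')

def shim_map(text):
    by_socket = defaultdict(list)
    for m in SHIM.finditer(text):
        by_socket[m.group("addr")].append(m.group("id")[:12])
    return by_socket

if __name__ == "__main__":
    for socket, ids in shim_map(sys.stdin.read()).items():
        print(socket, "->", ", ".join(ids))
    # e.g. .../s/1b8f0b7c... -> 719690c56496, 844407372477  (kube-proxy sandbox + container)
```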
Sep 9 22:12:52.131948 systemd-resolved[1413]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 9 22:12:52.147488 systemd-resolved[1413]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 9 22:12:52.184008 containerd[1584]: time="2025-09-09T22:12:52.183938451Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-dgwjl,Uid:490f5bc3-c120-4db5-af75-41fa07464bf0,Namespace:kube-system,Attempt:0,} returns sandbox id \"81442eb64f2b7a5a67e16aab5cd666e984c9a5f2d802e7c4f343070a585e3c4c\"" Sep 9 22:12:52.185285 kubelet[2811]: E0909 22:12:52.184806 2811 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 22:12:52.196822 containerd[1584]: time="2025-09-09T22:12:52.196690954Z" level=info msg="CreateContainer within sandbox \"81442eb64f2b7a5a67e16aab5cd666e984c9a5f2d802e7c4f343070a585e3c4c\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Sep 9 22:12:52.211134 containerd[1584]: time="2025-09-09T22:12:52.211008319Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-9l9k8,Uid:04314675-ee55-4258-af41-fd75d16c568e,Namespace:kube-system,Attempt:0,} returns sandbox id \"536fd630996a37b8dab1bfbcafbde2338f86ba36e6284a78901a78d22a996eac\"" Sep 9 22:12:52.212589 kubelet[2811]: E0909 22:12:52.212526 2811 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 22:12:52.215945 containerd[1584]: time="2025-09-09T22:12:52.215863850Z" level=info msg="CreateContainer within sandbox \"536fd630996a37b8dab1bfbcafbde2338f86ba36e6284a78901a78d22a996eac\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Sep 9 22:12:52.236506 containerd[1584]: time="2025-09-09T22:12:52.236437951Z" level=info msg="Container 1c0583f16f922e379e2539d93ea875fb83755f0a87ea68d45843f63f4473bc72: CDI devices from CRI Config.CDIDevices: []" Sep 9 22:12:52.254515 containerd[1584]: time="2025-09-09T22:12:52.254433685Z" level=info msg="CreateContainer within sandbox \"81442eb64f2b7a5a67e16aab5cd666e984c9a5f2d802e7c4f343070a585e3c4c\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"1c0583f16f922e379e2539d93ea875fb83755f0a87ea68d45843f63f4473bc72\"" Sep 9 22:12:52.256272 containerd[1584]: time="2025-09-09T22:12:52.256159145Z" level=info msg="StartContainer for \"1c0583f16f922e379e2539d93ea875fb83755f0a87ea68d45843f63f4473bc72\"" Sep 9 22:12:52.257247 containerd[1584]: time="2025-09-09T22:12:52.257162261Z" level=info msg="connecting to shim 1c0583f16f922e379e2539d93ea875fb83755f0a87ea68d45843f63f4473bc72" address="unix:///run/containerd/s/9d69a0b01b28b60f0f66c1bddb39d34f3a07ff09f9a93824ba86e6af122313dc" protocol=ttrpc version=3 Sep 9 22:12:52.259678 containerd[1584]: time="2025-09-09T22:12:52.259572633Z" level=info msg="Container 0220cd43998b892811a8e44e5fe6e950489e722af90e0a9b0a9ec8c07b54da8d: CDI devices from CRI Config.CDIDevices: []" Sep 9 22:12:52.274451 containerd[1584]: time="2025-09-09T22:12:52.274342001Z" level=info msg="CreateContainer within sandbox \"536fd630996a37b8dab1bfbcafbde2338f86ba36e6284a78901a78d22a996eac\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"0220cd43998b892811a8e44e5fe6e950489e722af90e0a9b0a9ec8c07b54da8d\"" Sep 9 22:12:52.276857 containerd[1584]: 
time="2025-09-09T22:12:52.275259611Z" level=info msg="StartContainer for \"0220cd43998b892811a8e44e5fe6e950489e722af90e0a9b0a9ec8c07b54da8d\"" Sep 9 22:12:52.276857 containerd[1584]: time="2025-09-09T22:12:52.276229103Z" level=info msg="connecting to shim 0220cd43998b892811a8e44e5fe6e950489e722af90e0a9b0a9ec8c07b54da8d" address="unix:///run/containerd/s/3e43857955d64c3e62336fafa0b1cc9c914c38b9748a9f2b3f12093283d1b4a3" protocol=ttrpc version=3 Sep 9 22:12:52.297116 systemd[1]: Started cri-containerd-1c0583f16f922e379e2539d93ea875fb83755f0a87ea68d45843f63f4473bc72.scope - libcontainer container 1c0583f16f922e379e2539d93ea875fb83755f0a87ea68d45843f63f4473bc72. Sep 9 22:12:52.302940 systemd[1]: Started cri-containerd-0220cd43998b892811a8e44e5fe6e950489e722af90e0a9b0a9ec8c07b54da8d.scope - libcontainer container 0220cd43998b892811a8e44e5fe6e950489e722af90e0a9b0a9ec8c07b54da8d. Sep 9 22:12:52.376861 containerd[1584]: time="2025-09-09T22:12:52.376636020Z" level=info msg="StartContainer for \"0220cd43998b892811a8e44e5fe6e950489e722af90e0a9b0a9ec8c07b54da8d\" returns successfully" Sep 9 22:12:52.383009 containerd[1584]: time="2025-09-09T22:12:52.382934658Z" level=info msg="StartContainer for \"1c0583f16f922e379e2539d93ea875fb83755f0a87ea68d45843f63f4473bc72\" returns successfully" Sep 9 22:12:52.510997 kubelet[2811]: E0909 22:12:52.510348 2811 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 22:12:52.516791 sudo[1804]: pam_unix(sudo:session): session closed for user root Sep 9 22:12:52.518612 kubelet[2811]: E0909 22:12:52.517361 2811 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 22:12:52.531863 sshd[1803]: Connection closed by 10.0.0.1 port 55090 Sep 9 22:12:52.538019 sshd-session[1800]: pam_unix(sshd:session): session closed for user core Sep 9 22:12:52.547559 systemd-logind[1563]: Session 7 logged out. Waiting for processes to exit. Sep 9 22:12:52.551266 systemd[1]: sshd@7-10.0.0.117:22-10.0.0.1:55090.service: Deactivated successfully. Sep 9 22:12:52.556602 systemd[1]: session-7.scope: Deactivated successfully. Sep 9 22:12:52.557030 systemd[1]: session-7.scope: Consumed 9.182s CPU time, 233.1M memory peak. Sep 9 22:12:52.562633 systemd-logind[1563]: Removed session 7. Sep 9 22:12:52.796852 kubelet[2811]: I0909 22:12:52.796653 2811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-9l9k8" podStartSLOduration=45.796618679 podStartE2EDuration="45.796618679s" podCreationTimestamp="2025-09-09 22:12:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-09 22:12:52.540403772 +0000 UTC m=+48.970929195" watchObservedRunningTime="2025-09-09 22:12:52.796618679 +0000 UTC m=+49.227144082" Sep 9 22:12:53.041379 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3790709809.mount: Deactivated successfully. 
Sep 9 22:12:53.521627 kubelet[2811]: E0909 22:12:53.519743 2811 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 22:12:53.695345 kubelet[2811]: I0909 22:12:53.693585 2811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-dgwjl" podStartSLOduration=46.693555973 podStartE2EDuration="46.693555973s" podCreationTimestamp="2025-09-09 22:12:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-09 22:12:52.795862559 +0000 UTC m=+49.226387962" watchObservedRunningTime="2025-09-09 22:12:53.693555973 +0000 UTC m=+50.124081376" Sep 9 22:12:54.526773 kubelet[2811]: E0909 22:12:54.524621 2811 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 22:13:01.268416 kubelet[2811]: E0909 22:13:01.268300 2811 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 22:13:01.565336 kubelet[2811]: E0909 22:13:01.561921 2811 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 22:13:17.865945 kubelet[2811]: E0909 22:13:17.865867 2811 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 22:13:22.754082 systemd[1]: Started sshd@8-10.0.0.117:22-10.0.0.1:44916.service - OpenSSH per-connection server daemon (10.0.0.1:44916). Sep 9 22:13:22.835931 sshd[4266]: Accepted publickey for core from 10.0.0.1 port 44916 ssh2: RSA SHA256:A2CJI2QL6ueQzwzJUDumHRmawTN/BqpJNEZzUqxCWKo Sep 9 22:13:22.838074 sshd-session[4266]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 22:13:22.843681 systemd-logind[1563]: New session 8 of user core. Sep 9 22:13:22.854924 systemd[1]: Started session-8.scope - Session 8 of User core. Sep 9 22:13:23.096763 sshd[4269]: Connection closed by 10.0.0.1 port 44916 Sep 9 22:13:23.097151 sshd-session[4266]: pam_unix(sshd:session): session closed for user core Sep 9 22:13:23.101645 systemd[1]: sshd@8-10.0.0.117:22-10.0.0.1:44916.service: Deactivated successfully. Sep 9 22:13:23.103824 systemd[1]: session-8.scope: Deactivated successfully. Sep 9 22:13:23.104685 systemd-logind[1563]: Session 8 logged out. Waiting for processes to exit. Sep 9 22:13:23.105935 systemd-logind[1563]: Removed session 8. Sep 9 22:13:25.868006 kubelet[2811]: E0909 22:13:25.867836 2811 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 22:13:28.119570 systemd[1]: Started sshd@9-10.0.0.117:22-10.0.0.1:44966.service - OpenSSH per-connection server daemon (10.0.0.1:44966). Sep 9 22:13:28.176873 sshd[4290]: Accepted publickey for core from 10.0.0.1 port 44966 ssh2: RSA SHA256:A2CJI2QL6ueQzwzJUDumHRmawTN/BqpJNEZzUqxCWKo Sep 9 22:13:28.178572 sshd-session[4290]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 22:13:28.184545 systemd-logind[1563]: New session 9 of user core. 
Sep 9 22:13:28.199039 systemd[1]: Started session-9.scope - Session 9 of User core. Sep 9 22:13:28.320354 sshd[4293]: Connection closed by 10.0.0.1 port 44966 Sep 9 22:13:28.320793 sshd-session[4290]: pam_unix(sshd:session): session closed for user core Sep 9 22:13:28.324943 systemd[1]: sshd@9-10.0.0.117:22-10.0.0.1:44966.service: Deactivated successfully. Sep 9 22:13:28.327238 systemd[1]: session-9.scope: Deactivated successfully. Sep 9 22:13:28.330227 systemd-logind[1563]: Session 9 logged out. Waiting for processes to exit. Sep 9 22:13:28.331493 systemd-logind[1563]: Removed session 9. Sep 9 22:13:31.866370 kubelet[2811]: E0909 22:13:31.866126 2811 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 22:13:33.337928 systemd[1]: Started sshd@10-10.0.0.117:22-10.0.0.1:46808.service - OpenSSH per-connection server daemon (10.0.0.1:46808). Sep 9 22:13:33.396252 sshd[4308]: Accepted publickey for core from 10.0.0.1 port 46808 ssh2: RSA SHA256:A2CJI2QL6ueQzwzJUDumHRmawTN/BqpJNEZzUqxCWKo Sep 9 22:13:33.398109 sshd-session[4308]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 22:13:33.403133 systemd-logind[1563]: New session 10 of user core. Sep 9 22:13:33.412925 systemd[1]: Started session-10.scope - Session 10 of User core. Sep 9 22:13:33.547232 sshd[4311]: Connection closed by 10.0.0.1 port 46808 Sep 9 22:13:33.547608 sshd-session[4308]: pam_unix(sshd:session): session closed for user core Sep 9 22:13:33.552750 systemd[1]: sshd@10-10.0.0.117:22-10.0.0.1:46808.service: Deactivated successfully. Sep 9 22:13:33.555212 systemd[1]: session-10.scope: Deactivated successfully. Sep 9 22:13:33.556291 systemd-logind[1563]: Session 10 logged out. Waiting for processes to exit. Sep 9 22:13:33.557967 systemd-logind[1563]: Removed session 10. Sep 9 22:13:38.566616 systemd[1]: Started sshd@11-10.0.0.117:22-10.0.0.1:46842.service - OpenSSH per-connection server daemon (10.0.0.1:46842). Sep 9 22:13:38.650626 sshd[4326]: Accepted publickey for core from 10.0.0.1 port 46842 ssh2: RSA SHA256:A2CJI2QL6ueQzwzJUDumHRmawTN/BqpJNEZzUqxCWKo Sep 9 22:13:38.653037 sshd-session[4326]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 22:13:38.666508 systemd-logind[1563]: New session 11 of user core. Sep 9 22:13:38.676061 systemd[1]: Started session-11.scope - Session 11 of User core. Sep 9 22:13:38.812110 sshd[4329]: Connection closed by 10.0.0.1 port 46842 Sep 9 22:13:38.812558 sshd-session[4326]: pam_unix(sshd:session): session closed for user core Sep 9 22:13:38.819284 systemd[1]: sshd@11-10.0.0.117:22-10.0.0.1:46842.service: Deactivated successfully. Sep 9 22:13:38.822319 systemd[1]: session-11.scope: Deactivated successfully. Sep 9 22:13:38.824921 systemd-logind[1563]: Session 11 logged out. Waiting for processes to exit. Sep 9 22:13:38.826604 systemd-logind[1563]: Removed session 11. Sep 9 22:13:41.866466 kubelet[2811]: E0909 22:13:41.866418 2811 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 22:13:43.829724 systemd[1]: Started sshd@12-10.0.0.117:22-10.0.0.1:56028.service - OpenSSH per-connection server daemon (10.0.0.1:56028). 
Sep 9 22:13:43.899226 sshd[4346]: Accepted publickey for core from 10.0.0.1 port 56028 ssh2: RSA SHA256:A2CJI2QL6ueQzwzJUDumHRmawTN/BqpJNEZzUqxCWKo Sep 9 22:13:43.900930 sshd-session[4346]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 22:13:43.913875 systemd-logind[1563]: New session 12 of user core. Sep 9 22:13:43.932548 systemd[1]: Started session-12.scope - Session 12 of User core. Sep 9 22:13:44.145941 sshd[4349]: Connection closed by 10.0.0.1 port 56028 Sep 9 22:13:44.146697 sshd-session[4346]: pam_unix(sshd:session): session closed for user core Sep 9 22:13:44.152386 systemd[1]: sshd@12-10.0.0.117:22-10.0.0.1:56028.service: Deactivated successfully. Sep 9 22:13:44.154634 systemd[1]: session-12.scope: Deactivated successfully. Sep 9 22:13:44.155514 systemd-logind[1563]: Session 12 logged out. Waiting for processes to exit. Sep 9 22:13:44.157153 systemd-logind[1563]: Removed session 12. Sep 9 22:13:47.869137 kubelet[2811]: E0909 22:13:47.868949 2811 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 22:13:49.160851 systemd[1]: Started sshd@13-10.0.0.117:22-10.0.0.1:56034.service - OpenSSH per-connection server daemon (10.0.0.1:56034). Sep 9 22:13:49.226663 sshd[4363]: Accepted publickey for core from 10.0.0.1 port 56034 ssh2: RSA SHA256:A2CJI2QL6ueQzwzJUDumHRmawTN/BqpJNEZzUqxCWKo Sep 9 22:13:49.228682 sshd-session[4363]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 22:13:49.235212 systemd-logind[1563]: New session 13 of user core. Sep 9 22:13:49.244920 systemd[1]: Started session-13.scope - Session 13 of User core. Sep 9 22:13:49.377240 sshd[4366]: Connection closed by 10.0.0.1 port 56034 Sep 9 22:13:49.377862 sshd-session[4363]: pam_unix(sshd:session): session closed for user core Sep 9 22:13:49.384177 systemd[1]: sshd@13-10.0.0.117:22-10.0.0.1:56034.service: Deactivated successfully. Sep 9 22:13:49.386693 systemd[1]: session-13.scope: Deactivated successfully. Sep 9 22:13:49.387907 systemd-logind[1563]: Session 13 logged out. Waiting for processes to exit. Sep 9 22:13:49.389736 systemd-logind[1563]: Removed session 13. Sep 9 22:13:54.401233 systemd[1]: Started sshd@14-10.0.0.117:22-10.0.0.1:49940.service - OpenSSH per-connection server daemon (10.0.0.1:49940). Sep 9 22:13:54.465043 sshd[4381]: Accepted publickey for core from 10.0.0.1 port 49940 ssh2: RSA SHA256:A2CJI2QL6ueQzwzJUDumHRmawTN/BqpJNEZzUqxCWKo Sep 9 22:13:54.467256 sshd-session[4381]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 22:13:54.473987 systemd-logind[1563]: New session 14 of user core. Sep 9 22:13:54.485018 systemd[1]: Started session-14.scope - Session 14 of User core. Sep 9 22:13:54.626567 sshd[4384]: Connection closed by 10.0.0.1 port 49940 Sep 9 22:13:54.627066 sshd-session[4381]: pam_unix(sshd:session): session closed for user core Sep 9 22:13:54.638166 systemd[1]: sshd@14-10.0.0.117:22-10.0.0.1:49940.service: Deactivated successfully. Sep 9 22:13:54.640895 systemd[1]: session-14.scope: Deactivated successfully. Sep 9 22:13:54.642118 systemd-logind[1563]: Session 14 logged out. Waiting for processes to exit. Sep 9 22:13:54.646669 systemd[1]: Started sshd@15-10.0.0.117:22-10.0.0.1:49956.service - OpenSSH per-connection server daemon (10.0.0.1:49956). Sep 9 22:13:54.647647 systemd-logind[1563]: Removed session 14. 
Sep 9 22:13:54.716136 sshd[4398]: Accepted publickey for core from 10.0.0.1 port 49956 ssh2: RSA SHA256:A2CJI2QL6ueQzwzJUDumHRmawTN/BqpJNEZzUqxCWKo Sep 9 22:13:54.719388 sshd-session[4398]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 22:13:54.726252 systemd-logind[1563]: New session 15 of user core. Sep 9 22:13:54.743017 systemd[1]: Started session-15.scope - Session 15 of User core. Sep 9 22:13:54.917416 sshd[4401]: Connection closed by 10.0.0.1 port 49956 Sep 9 22:13:54.918788 sshd-session[4398]: pam_unix(sshd:session): session closed for user core Sep 9 22:13:54.933914 systemd[1]: sshd@15-10.0.0.117:22-10.0.0.1:49956.service: Deactivated successfully. Sep 9 22:13:54.938600 systemd[1]: session-15.scope: Deactivated successfully. Sep 9 22:13:54.941364 systemd-logind[1563]: Session 15 logged out. Waiting for processes to exit. Sep 9 22:13:54.949018 systemd[1]: Started sshd@16-10.0.0.117:22-10.0.0.1:49960.service - OpenSSH per-connection server daemon (10.0.0.1:49960). Sep 9 22:13:54.950633 systemd-logind[1563]: Removed session 15. Sep 9 22:13:55.005040 sshd[4413]: Accepted publickey for core from 10.0.0.1 port 49960 ssh2: RSA SHA256:A2CJI2QL6ueQzwzJUDumHRmawTN/BqpJNEZzUqxCWKo Sep 9 22:13:55.007579 sshd-session[4413]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 22:13:55.015558 systemd-logind[1563]: New session 16 of user core. Sep 9 22:13:55.025054 systemd[1]: Started session-16.scope - Session 16 of User core. Sep 9 22:13:55.187230 sshd[4416]: Connection closed by 10.0.0.1 port 49960 Sep 9 22:13:55.187667 sshd-session[4413]: pam_unix(sshd:session): session closed for user core Sep 9 22:13:55.194380 systemd[1]: sshd@16-10.0.0.117:22-10.0.0.1:49960.service: Deactivated successfully. Sep 9 22:13:55.197006 systemd[1]: session-16.scope: Deactivated successfully. Sep 9 22:13:55.198131 systemd-logind[1563]: Session 16 logged out. Waiting for processes to exit. Sep 9 22:13:55.199896 systemd-logind[1563]: Removed session 16. Sep 9 22:14:00.207063 systemd[1]: Started sshd@17-10.0.0.117:22-10.0.0.1:43330.service - OpenSSH per-connection server daemon (10.0.0.1:43330). Sep 9 22:14:00.262093 sshd[4429]: Accepted publickey for core from 10.0.0.1 port 43330 ssh2: RSA SHA256:A2CJI2QL6ueQzwzJUDumHRmawTN/BqpJNEZzUqxCWKo Sep 9 22:14:00.264241 sshd-session[4429]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 22:14:00.269814 systemd-logind[1563]: New session 17 of user core. Sep 9 22:14:00.275898 systemd[1]: Started session-17.scope - Session 17 of User core. Sep 9 22:14:00.397343 sshd[4432]: Connection closed by 10.0.0.1 port 43330 Sep 9 22:14:00.398988 sshd-session[4429]: pam_unix(sshd:session): session closed for user core Sep 9 22:14:00.405081 systemd[1]: sshd@17-10.0.0.117:22-10.0.0.1:43330.service: Deactivated successfully. Sep 9 22:14:00.408121 systemd[1]: session-17.scope: Deactivated successfully. Sep 9 22:14:00.410448 systemd-logind[1563]: Session 17 logged out. Waiting for processes to exit. Sep 9 22:14:00.412059 systemd-logind[1563]: Removed session 17. Sep 9 22:14:05.419256 systemd[1]: Started sshd@18-10.0.0.117:22-10.0.0.1:43338.service - OpenSSH per-connection server daemon (10.0.0.1:43338). 
Sep 9 22:14:05.479432 sshd[4447]: Accepted publickey for core from 10.0.0.1 port 43338 ssh2: RSA SHA256:A2CJI2QL6ueQzwzJUDumHRmawTN/BqpJNEZzUqxCWKo Sep 9 22:14:05.481252 sshd-session[4447]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 22:14:05.485878 systemd-logind[1563]: New session 18 of user core. Sep 9 22:14:05.493833 systemd[1]: Started session-18.scope - Session 18 of User core. Sep 9 22:14:05.656618 sshd[4450]: Connection closed by 10.0.0.1 port 43338 Sep 9 22:14:05.657059 sshd-session[4447]: pam_unix(sshd:session): session closed for user core Sep 9 22:14:05.661677 systemd[1]: sshd@18-10.0.0.117:22-10.0.0.1:43338.service: Deactivated successfully. Sep 9 22:14:05.664028 systemd[1]: session-18.scope: Deactivated successfully. Sep 9 22:14:05.665759 systemd-logind[1563]: Session 18 logged out. Waiting for processes to exit. Sep 9 22:14:05.667466 systemd-logind[1563]: Removed session 18. Sep 9 22:14:08.868333 kubelet[2811]: E0909 22:14:08.868257 2811 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 22:14:09.867145 kubelet[2811]: E0909 22:14:09.866513 2811 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 22:14:10.679420 systemd[1]: Started sshd@19-10.0.0.117:22-10.0.0.1:37872.service - OpenSSH per-connection server daemon (10.0.0.1:37872). Sep 9 22:14:10.765130 sshd[4463]: Accepted publickey for core from 10.0.0.1 port 37872 ssh2: RSA SHA256:A2CJI2QL6ueQzwzJUDumHRmawTN/BqpJNEZzUqxCWKo Sep 9 22:14:10.767457 sshd-session[4463]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 22:14:10.781596 systemd-logind[1563]: New session 19 of user core. Sep 9 22:14:10.793064 systemd[1]: Started session-19.scope - Session 19 of User core. Sep 9 22:14:10.961607 sshd[4466]: Connection closed by 10.0.0.1 port 37872 Sep 9 22:14:10.961472 sshd-session[4463]: pam_unix(sshd:session): session closed for user core Sep 9 22:14:10.976523 systemd[1]: sshd@19-10.0.0.117:22-10.0.0.1:37872.service: Deactivated successfully. Sep 9 22:14:10.979387 systemd[1]: session-19.scope: Deactivated successfully. Sep 9 22:14:10.981007 systemd-logind[1563]: Session 19 logged out. Waiting for processes to exit. Sep 9 22:14:10.985248 systemd[1]: Started sshd@20-10.0.0.117:22-10.0.0.1:37880.service - OpenSSH per-connection server daemon (10.0.0.1:37880). Sep 9 22:14:10.992785 systemd-logind[1563]: Removed session 19. Sep 9 22:14:11.058058 sshd[4479]: Accepted publickey for core from 10.0.0.1 port 37880 ssh2: RSA SHA256:A2CJI2QL6ueQzwzJUDumHRmawTN/BqpJNEZzUqxCWKo Sep 9 22:14:11.060762 sshd-session[4479]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 22:14:11.071129 systemd-logind[1563]: New session 20 of user core. Sep 9 22:14:11.089578 systemd[1]: Started session-20.scope - Session 20 of User core. Sep 9 22:14:11.676728 sshd[4482]: Connection closed by 10.0.0.1 port 37880 Sep 9 22:14:11.676008 sshd-session[4479]: pam_unix(sshd:session): session closed for user core Sep 9 22:14:11.728489 systemd[1]: sshd@20-10.0.0.117:22-10.0.0.1:37880.service: Deactivated successfully. Sep 9 22:14:11.733356 systemd[1]: session-20.scope: Deactivated successfully. Sep 9 22:14:11.747505 systemd-logind[1563]: Session 20 logged out. Waiting for processes to exit. 
Sep 9 22:14:11.757309 systemd[1]: Started sshd@21-10.0.0.117:22-10.0.0.1:37886.service - OpenSSH per-connection server daemon (10.0.0.1:37886). Sep 9 22:14:11.768999 systemd-logind[1563]: Removed session 20. Sep 9 22:14:11.934429 sshd[4495]: Accepted publickey for core from 10.0.0.1 port 37886 ssh2: RSA SHA256:A2CJI2QL6ueQzwzJUDumHRmawTN/BqpJNEZzUqxCWKo Sep 9 22:14:11.938118 sshd-session[4495]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 22:14:11.955524 systemd-logind[1563]: New session 21 of user core. Sep 9 22:14:11.973870 systemd[1]: Started session-21.scope - Session 21 of User core. Sep 9 22:14:15.866479 kubelet[2811]: E0909 22:14:15.866392 2811 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 22:14:16.123074 sshd[4498]: Connection closed by 10.0.0.1 port 37886 Sep 9 22:14:16.124048 sshd-session[4495]: pam_unix(sshd:session): session closed for user core Sep 9 22:14:16.133416 systemd[1]: sshd@21-10.0.0.117:22-10.0.0.1:37886.service: Deactivated successfully. Sep 9 22:14:16.136254 systemd[1]: session-21.scope: Deactivated successfully. Sep 9 22:14:16.137585 systemd-logind[1563]: Session 21 logged out. Waiting for processes to exit. Sep 9 22:14:16.143354 systemd[1]: Started sshd@22-10.0.0.117:22-10.0.0.1:37890.service - OpenSSH per-connection server daemon (10.0.0.1:37890). Sep 9 22:14:16.144640 systemd-logind[1563]: Removed session 21. Sep 9 22:14:16.220984 sshd[4533]: Accepted publickey for core from 10.0.0.1 port 37890 ssh2: RSA SHA256:A2CJI2QL6ueQzwzJUDumHRmawTN/BqpJNEZzUqxCWKo Sep 9 22:14:16.223352 sshd-session[4533]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 22:14:16.230396 systemd-logind[1563]: New session 22 of user core. Sep 9 22:14:16.240037 systemd[1]: Started session-22.scope - Session 22 of User core. Sep 9 22:14:16.602559 sshd[4536]: Connection closed by 10.0.0.1 port 37890 Sep 9 22:14:16.603450 sshd-session[4533]: pam_unix(sshd:session): session closed for user core Sep 9 22:14:16.625601 systemd[1]: sshd@22-10.0.0.117:22-10.0.0.1:37890.service: Deactivated successfully. Sep 9 22:14:16.629036 systemd[1]: session-22.scope: Deactivated successfully. Sep 9 22:14:16.632459 systemd-logind[1563]: Session 22 logged out. Waiting for processes to exit. Sep 9 22:14:16.636985 systemd[1]: Started sshd@23-10.0.0.117:22-10.0.0.1:37898.service - OpenSSH per-connection server daemon (10.0.0.1:37898). Sep 9 22:14:16.638129 systemd-logind[1563]: Removed session 22. Sep 9 22:14:16.709095 sshd[4547]: Accepted publickey for core from 10.0.0.1 port 37898 ssh2: RSA SHA256:A2CJI2QL6ueQzwzJUDumHRmawTN/BqpJNEZzUqxCWKo Sep 9 22:14:16.711739 sshd-session[4547]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 22:14:16.721744 systemd-logind[1563]: New session 23 of user core. Sep 9 22:14:16.739131 systemd[1]: Started session-23.scope - Session 23 of User core. Sep 9 22:14:16.883315 sshd[4550]: Connection closed by 10.0.0.1 port 37898 Sep 9 22:14:16.882597 sshd-session[4547]: pam_unix(sshd:session): session closed for user core Sep 9 22:14:16.890233 systemd[1]: sshd@23-10.0.0.117:22-10.0.0.1:37898.service: Deactivated successfully. Sep 9 22:14:16.892717 systemd[1]: session-23.scope: Deactivated successfully. Sep 9 22:14:16.893905 systemd-logind[1563]: Session 23 logged out. Waiting for processes to exit. 
Sep 9 22:14:16.895540 systemd-logind[1563]: Removed session 23. Sep 9 22:14:21.904075 systemd[1]: Started sshd@24-10.0.0.117:22-10.0.0.1:48590.service - OpenSSH per-connection server daemon (10.0.0.1:48590). Sep 9 22:14:21.987691 sshd[4564]: Accepted publickey for core from 10.0.0.1 port 48590 ssh2: RSA SHA256:A2CJI2QL6ueQzwzJUDumHRmawTN/BqpJNEZzUqxCWKo Sep 9 22:14:21.995256 sshd-session[4564]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 22:14:22.015172 systemd-logind[1563]: New session 24 of user core. Sep 9 22:14:22.029298 systemd[1]: Started session-24.scope - Session 24 of User core. Sep 9 22:14:22.228413 sshd[4567]: Connection closed by 10.0.0.1 port 48590 Sep 9 22:14:22.228623 sshd-session[4564]: pam_unix(sshd:session): session closed for user core Sep 9 22:14:22.242462 systemd[1]: sshd@24-10.0.0.117:22-10.0.0.1:48590.service: Deactivated successfully. Sep 9 22:14:22.245365 systemd[1]: session-24.scope: Deactivated successfully. Sep 9 22:14:22.247053 systemd-logind[1563]: Session 24 logged out. Waiting for processes to exit. Sep 9 22:14:22.251970 systemd-logind[1563]: Removed session 24. Sep 9 22:14:27.248003 systemd[1]: Started sshd@25-10.0.0.117:22-10.0.0.1:48606.service - OpenSSH per-connection server daemon (10.0.0.1:48606). Sep 9 22:14:27.412631 sshd[4583]: Accepted publickey for core from 10.0.0.1 port 48606 ssh2: RSA SHA256:A2CJI2QL6ueQzwzJUDumHRmawTN/BqpJNEZzUqxCWKo Sep 9 22:14:27.412332 sshd-session[4583]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 22:14:27.469456 systemd-logind[1563]: New session 25 of user core. Sep 9 22:14:27.495328 systemd[1]: Started session-25.scope - Session 25 of User core. Sep 9 22:14:27.775182 sshd[4587]: Connection closed by 10.0.0.1 port 48606 Sep 9 22:14:27.776370 sshd-session[4583]: pam_unix(sshd:session): session closed for user core Sep 9 22:14:27.793359 systemd[1]: sshd@25-10.0.0.117:22-10.0.0.1:48606.service: Deactivated successfully. Sep 9 22:14:27.798132 systemd[1]: session-25.scope: Deactivated successfully. Sep 9 22:14:27.823870 systemd-logind[1563]: Session 25 logged out. Waiting for processes to exit. Sep 9 22:14:27.826511 systemd-logind[1563]: Removed session 25. Sep 9 22:14:32.806384 systemd[1]: Started sshd@26-10.0.0.117:22-10.0.0.1:33776.service - OpenSSH per-connection server daemon (10.0.0.1:33776). Sep 9 22:14:32.876299 sshd[4603]: Accepted publickey for core from 10.0.0.1 port 33776 ssh2: RSA SHA256:A2CJI2QL6ueQzwzJUDumHRmawTN/BqpJNEZzUqxCWKo Sep 9 22:14:32.878508 sshd-session[4603]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 22:14:32.885479 systemd-logind[1563]: New session 26 of user core. Sep 9 22:14:32.900012 systemd[1]: Started session-26.scope - Session 26 of User core. Sep 9 22:14:33.016942 sshd[4606]: Connection closed by 10.0.0.1 port 33776 Sep 9 22:14:33.017477 sshd-session[4603]: pam_unix(sshd:session): session closed for user core Sep 9 22:14:33.022600 systemd[1]: sshd@26-10.0.0.117:22-10.0.0.1:33776.service: Deactivated successfully. Sep 9 22:14:33.025560 systemd[1]: session-26.scope: Deactivated successfully. Sep 9 22:14:33.026662 systemd-logind[1563]: Session 26 logged out. Waiting for processes to exit. Sep 9 22:14:33.028294 systemd-logind[1563]: Removed session 26. 
Sep 9 22:14:33.866624 kubelet[2811]: E0909 22:14:33.866545 2811 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 22:14:36.866754 kubelet[2811]: E0909 22:14:36.866669 2811 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 22:14:38.048952 systemd[1]: Started sshd@27-10.0.0.117:22-10.0.0.1:33778.service - OpenSSH per-connection server daemon (10.0.0.1:33778). Sep 9 22:14:38.155893 sshd[4620]: Accepted publickey for core from 10.0.0.1 port 33778 ssh2: RSA SHA256:A2CJI2QL6ueQzwzJUDumHRmawTN/BqpJNEZzUqxCWKo Sep 9 22:14:38.160490 sshd-session[4620]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 22:14:38.182255 systemd-logind[1563]: New session 27 of user core. Sep 9 22:14:38.192073 systemd[1]: Started session-27.scope - Session 27 of User core. Sep 9 22:14:38.554592 sshd[4623]: Connection closed by 10.0.0.1 port 33778 Sep 9 22:14:38.553250 sshd-session[4620]: pam_unix(sshd:session): session closed for user core Sep 9 22:14:38.566808 systemd[1]: sshd@27-10.0.0.117:22-10.0.0.1:33778.service: Deactivated successfully. Sep 9 22:14:38.572563 systemd[1]: session-27.scope: Deactivated successfully. Sep 9 22:14:38.575257 systemd-logind[1563]: Session 27 logged out. Waiting for processes to exit. Sep 9 22:14:38.578695 systemd-logind[1563]: Removed session 27. Sep 9 22:14:43.611209 systemd[1]: Started sshd@28-10.0.0.117:22-10.0.0.1:41082.service - OpenSSH per-connection server daemon (10.0.0.1:41082). Sep 9 22:14:43.793550 sshd[4638]: Accepted publickey for core from 10.0.0.1 port 41082 ssh2: RSA SHA256:A2CJI2QL6ueQzwzJUDumHRmawTN/BqpJNEZzUqxCWKo Sep 9 22:14:43.803594 sshd-session[4638]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 22:14:43.825551 systemd-logind[1563]: New session 28 of user core. Sep 9 22:14:43.854463 systemd[1]: Started session-28.scope - Session 28 of User core. Sep 9 22:14:44.301882 sshd[4641]: Connection closed by 10.0.0.1 port 41082 Sep 9 22:14:44.300139 sshd-session[4638]: pam_unix(sshd:session): session closed for user core Sep 9 22:14:44.349181 systemd[1]: sshd@28-10.0.0.117:22-10.0.0.1:41082.service: Deactivated successfully. Sep 9 22:14:44.352301 systemd[1]: session-28.scope: Deactivated successfully. Sep 9 22:14:44.368815 systemd-logind[1563]: Session 28 logged out. Waiting for processes to exit. Sep 9 22:14:44.380845 systemd[1]: Started sshd@29-10.0.0.117:22-10.0.0.1:41086.service - OpenSSH per-connection server daemon (10.0.0.1:41086). Sep 9 22:14:44.383867 systemd-logind[1563]: Removed session 28. Sep 9 22:14:44.521047 sshd[4655]: Accepted publickey for core from 10.0.0.1 port 41086 ssh2: RSA SHA256:A2CJI2QL6ueQzwzJUDumHRmawTN/BqpJNEZzUqxCWKo Sep 9 22:14:44.527987 sshd-session[4655]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 22:14:44.543827 systemd-logind[1563]: New session 29 of user core. Sep 9 22:14:44.569224 systemd[1]: Started session-29.scope - Session 29 of User core. 
Sep 9 22:14:46.596109 containerd[1584]: time="2025-09-09T22:14:46.594938123Z" level=info msg="StopContainer for \"7cc16a80c5d358bd91c85316de5f04ffb55863f6b7d0e6ed3a1e5b5660df800a\" with timeout 30 (s)" Sep 9 22:14:46.646334 containerd[1584]: time="2025-09-09T22:14:46.645338534Z" level=info msg="Stop container \"7cc16a80c5d358bd91c85316de5f04ffb55863f6b7d0e6ed3a1e5b5660df800a\" with signal terminated" Sep 9 22:14:46.686463 systemd[1]: cri-containerd-7cc16a80c5d358bd91c85316de5f04ffb55863f6b7d0e6ed3a1e5b5660df800a.scope: Deactivated successfully. Sep 9 22:14:46.698403 containerd[1584]: time="2025-09-09T22:14:46.698196002Z" level=info msg="received exit event container_id:\"7cc16a80c5d358bd91c85316de5f04ffb55863f6b7d0e6ed3a1e5b5660df800a\" id:\"7cc16a80c5d358bd91c85316de5f04ffb55863f6b7d0e6ed3a1e5b5660df800a\" pid:3318 exited_at:{seconds:1757456086 nanos:697476882}" Sep 9 22:14:46.722945 containerd[1584]: time="2025-09-09T22:14:46.722807763Z" level=info msg="TaskExit event in podsandbox handler container_id:\"7cc16a80c5d358bd91c85316de5f04ffb55863f6b7d0e6ed3a1e5b5660df800a\" id:\"7cc16a80c5d358bd91c85316de5f04ffb55863f6b7d0e6ed3a1e5b5660df800a\" pid:3318 exited_at:{seconds:1757456086 nanos:697476882}" Sep 9 22:14:46.746102 containerd[1584]: time="2025-09-09T22:14:46.744820812Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Sep 9 22:14:46.749195 containerd[1584]: time="2025-09-09T22:14:46.748306129Z" level=info msg="TaskExit event in podsandbox handler container_id:\"8097de9c4cc34cfb66b4114930c72c80175fe6be680299c8c1fe1adf1d1ba029\" id:\"5ea118af979b29cc9e9366ec2026e3b5fc2925e917a1c1bee2e04a78058d66eb\" pid:4679 exited_at:{seconds:1757456086 nanos:746699321}" Sep 9 22:14:46.752229 containerd[1584]: time="2025-09-09T22:14:46.752174947Z" level=info msg="StopContainer for \"8097de9c4cc34cfb66b4114930c72c80175fe6be680299c8c1fe1adf1d1ba029\" with timeout 2 (s)" Sep 9 22:14:46.752910 containerd[1584]: time="2025-09-09T22:14:46.752875072Z" level=info msg="Stop container \"8097de9c4cc34cfb66b4114930c72c80175fe6be680299c8c1fe1adf1d1ba029\" with signal terminated" Sep 9 22:14:46.756421 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-7cc16a80c5d358bd91c85316de5f04ffb55863f6b7d0e6ed3a1e5b5660df800a-rootfs.mount: Deactivated successfully. Sep 9 22:14:46.787283 systemd-networkd[1474]: lxc_health: Link DOWN Sep 9 22:14:46.787292 systemd-networkd[1474]: lxc_health: Lost carrier Sep 9 22:14:46.824208 containerd[1584]: time="2025-09-09T22:14:46.824127522Z" level=info msg="StopContainer for \"7cc16a80c5d358bd91c85316de5f04ffb55863f6b7d0e6ed3a1e5b5660df800a\" returns successfully" Sep 9 22:14:46.831358 containerd[1584]: time="2025-09-09T22:14:46.830960387Z" level=info msg="StopPodSandbox for \"34d0fca3a76acda13e23eb4de4513535648a20deb0c35114f3a2563ef2af7605\"" Sep 9 22:14:46.831358 containerd[1584]: time="2025-09-09T22:14:46.831087646Z" level=info msg="Container to stop \"7cc16a80c5d358bd91c85316de5f04ffb55863f6b7d0e6ed3a1e5b5660df800a\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 9 22:14:46.853616 systemd[1]: cri-containerd-8097de9c4cc34cfb66b4114930c72c80175fe6be680299c8c1fe1adf1d1ba029.scope: Deactivated successfully. 
Sep 9 22:14:46.854156 systemd[1]: cri-containerd-8097de9c4cc34cfb66b4114930c72c80175fe6be680299c8c1fe1adf1d1ba029.scope: Consumed 10.237s CPU time, 143.3M memory peak, 608K read from disk, 13.3M written to disk. Sep 9 22:14:46.863463 containerd[1584]: time="2025-09-09T22:14:46.858260178Z" level=info msg="received exit event container_id:\"8097de9c4cc34cfb66b4114930c72c80175fe6be680299c8c1fe1adf1d1ba029\" id:\"8097de9c4cc34cfb66b4114930c72c80175fe6be680299c8c1fe1adf1d1ba029\" pid:3433 exited_at:{seconds:1757456086 nanos:858028362}" Sep 9 22:14:46.863687 containerd[1584]: time="2025-09-09T22:14:46.862267826Z" level=info msg="TaskExit event in podsandbox handler container_id:\"8097de9c4cc34cfb66b4114930c72c80175fe6be680299c8c1fe1adf1d1ba029\" id:\"8097de9c4cc34cfb66b4114930c72c80175fe6be680299c8c1fe1adf1d1ba029\" pid:3433 exited_at:{seconds:1757456086 nanos:858028362}" Sep 9 22:14:46.873543 systemd[1]: cri-containerd-34d0fca3a76acda13e23eb4de4513535648a20deb0c35114f3a2563ef2af7605.scope: Deactivated successfully. Sep 9 22:14:46.878297 containerd[1584]: time="2025-09-09T22:14:46.878256168Z" level=info msg="TaskExit event in podsandbox handler container_id:\"34d0fca3a76acda13e23eb4de4513535648a20deb0c35114f3a2563ef2af7605\" id:\"34d0fca3a76acda13e23eb4de4513535648a20deb0c35114f3a2563ef2af7605\" pid:2974 exit_status:137 exited_at:{seconds:1757456086 nanos:877358502}" Sep 9 22:14:46.968625 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-8097de9c4cc34cfb66b4114930c72c80175fe6be680299c8c1fe1adf1d1ba029-rootfs.mount: Deactivated successfully. Sep 9 22:14:47.012680 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-34d0fca3a76acda13e23eb4de4513535648a20deb0c35114f3a2563ef2af7605-rootfs.mount: Deactivated successfully. Sep 9 22:14:47.030622 containerd[1584]: time="2025-09-09T22:14:47.029768969Z" level=info msg="shim disconnected" id=34d0fca3a76acda13e23eb4de4513535648a20deb0c35114f3a2563ef2af7605 namespace=k8s.io Sep 9 22:14:47.030622 containerd[1584]: time="2025-09-09T22:14:47.029804385Z" level=warning msg="cleaning up after shim disconnected" id=34d0fca3a76acda13e23eb4de4513535648a20deb0c35114f3a2563ef2af7605 namespace=k8s.io Sep 9 22:14:47.065356 containerd[1584]: time="2025-09-09T22:14:47.029812951Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 9 22:14:47.065356 containerd[1584]: time="2025-09-09T22:14:47.050733058Z" level=info msg="StopContainer for \"8097de9c4cc34cfb66b4114930c72c80175fe6be680299c8c1fe1adf1d1ba029\" returns successfully" Sep 9 22:14:47.073259 containerd[1584]: time="2025-09-09T22:14:47.072208829Z" level=info msg="StopPodSandbox for \"9efcfb911708b17146bbb5f84f46a861f7ea072204a4bfe5bdeebaad0ed6c788\"" Sep 9 22:14:47.074126 containerd[1584]: time="2025-09-09T22:14:47.073685404Z" level=info msg="Container to stop \"a95f62b1fc09b9f11281220048a4f56429dd14f6ff1bdba42395b709f4bc590d\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 9 22:14:47.075794 containerd[1584]: time="2025-09-09T22:14:47.075748492Z" level=info msg="Container to stop \"765a19e114d2722452abd71d0c6a6232c6d0b21c714a6f78be214a7f0f454348\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 9 22:14:47.075950 containerd[1584]: time="2025-09-09T22:14:47.075930774Z" level=info msg="Container to stop \"438058564076645e3408feb3bfc3fe2f63a4006a922be77756002c2f38e665c2\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 9 22:14:47.076059 containerd[1584]: time="2025-09-09T22:14:47.076025011Z" level=info 
msg="Container to stop \"e38eac7fe417eac831270f7da53877ed15237ac46286278c209c3f39482ce68d\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 9 22:14:47.076157 containerd[1584]: time="2025-09-09T22:14:47.076140398Z" level=info msg="Container to stop \"8097de9c4cc34cfb66b4114930c72c80175fe6be680299c8c1fe1adf1d1ba029\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 9 22:14:47.106286 systemd[1]: cri-containerd-9efcfb911708b17146bbb5f84f46a861f7ea072204a4bfe5bdeebaad0ed6c788.scope: Deactivated successfully. Sep 9 22:14:47.225972 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-9efcfb911708b17146bbb5f84f46a861f7ea072204a4bfe5bdeebaad0ed6c788-rootfs.mount: Deactivated successfully. Sep 9 22:14:47.247234 containerd[1584]: time="2025-09-09T22:14:47.247088553Z" level=info msg="shim disconnected" id=9efcfb911708b17146bbb5f84f46a861f7ea072204a4bfe5bdeebaad0ed6c788 namespace=k8s.io Sep 9 22:14:47.247234 containerd[1584]: time="2025-09-09T22:14:47.247137495Z" level=warning msg="cleaning up after shim disconnected" id=9efcfb911708b17146bbb5f84f46a861f7ea072204a4bfe5bdeebaad0ed6c788 namespace=k8s.io Sep 9 22:14:47.247234 containerd[1584]: time="2025-09-09T22:14:47.247148375Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 9 22:14:47.251924 containerd[1584]: time="2025-09-09T22:14:47.251223934Z" level=info msg="TaskExit event in podsandbox handler container_id:\"9efcfb911708b17146bbb5f84f46a861f7ea072204a4bfe5bdeebaad0ed6c788\" id:\"9efcfb911708b17146bbb5f84f46a861f7ea072204a4bfe5bdeebaad0ed6c788\" pid:2976 exit_status:137 exited_at:{seconds:1757456087 nanos:107017216}" Sep 9 22:14:47.251924 containerd[1584]: time="2025-09-09T22:14:47.251368416Z" level=info msg="received exit event sandbox_id:\"9efcfb911708b17146bbb5f84f46a861f7ea072204a4bfe5bdeebaad0ed6c788\" exit_status:137 exited_at:{seconds:1757456087 nanos:107017216}" Sep 9 22:14:47.251924 containerd[1584]: time="2025-09-09T22:14:47.251478563Z" level=info msg="received exit event sandbox_id:\"34d0fca3a76acda13e23eb4de4513535648a20deb0c35114f3a2563ef2af7605\" exit_status:137 exited_at:{seconds:1757456086 nanos:877358502}" Sep 9 22:14:47.255960 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-34d0fca3a76acda13e23eb4de4513535648a20deb0c35114f3a2563ef2af7605-shm.mount: Deactivated successfully. 
Sep 9 22:14:47.258134 containerd[1584]: time="2025-09-09T22:14:47.258088035Z" level=info msg="TearDown network for sandbox \"34d0fca3a76acda13e23eb4de4513535648a20deb0c35114f3a2563ef2af7605\" successfully" Sep 9 22:14:47.258325 containerd[1584]: time="2025-09-09T22:14:47.258298891Z" level=info msg="StopPodSandbox for \"34d0fca3a76acda13e23eb4de4513535648a20deb0c35114f3a2563ef2af7605\" returns successfully" Sep 9 22:14:47.260393 containerd[1584]: time="2025-09-09T22:14:47.258483347Z" level=info msg="TearDown network for sandbox \"9efcfb911708b17146bbb5f84f46a861f7ea072204a4bfe5bdeebaad0ed6c788\" successfully" Sep 9 22:14:47.260643 containerd[1584]: time="2025-09-09T22:14:47.260625874Z" level=info msg="StopPodSandbox for \"9efcfb911708b17146bbb5f84f46a861f7ea072204a4bfe5bdeebaad0ed6c788\" returns successfully" Sep 9 22:14:47.332995 kubelet[2811]: I0909 22:14:47.332916 2811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/b4ddd82f-b23b-4294-ac2f-16085266df62-xtables-lock\") pod \"b4ddd82f-b23b-4294-ac2f-16085266df62\" (UID: \"b4ddd82f-b23b-4294-ac2f-16085266df62\") " Sep 9 22:14:47.332995 kubelet[2811]: I0909 22:14:47.333018 2811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/b4ddd82f-b23b-4294-ac2f-16085266df62-cilium-run\") pod \"b4ddd82f-b23b-4294-ac2f-16085266df62\" (UID: \"b4ddd82f-b23b-4294-ac2f-16085266df62\") " Sep 9 22:14:47.332995 kubelet[2811]: I0909 22:14:47.333103 2811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b4ddd82f-b23b-4294-ac2f-16085266df62-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "b4ddd82f-b23b-4294-ac2f-16085266df62" (UID: "b4ddd82f-b23b-4294-ac2f-16085266df62"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 9 22:14:47.332995 kubelet[2811]: I0909 22:14:47.333102 2811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b4ddd82f-b23b-4294-ac2f-16085266df62-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "b4ddd82f-b23b-4294-ac2f-16085266df62" (UID: "b4ddd82f-b23b-4294-ac2f-16085266df62"). InnerVolumeSpecName "xtables-lock". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 9 22:14:47.337356 kubelet[2811]: I0909 22:14:47.333186 2811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9mcmw\" (UniqueName: \"kubernetes.io/projected/b4ddd82f-b23b-4294-ac2f-16085266df62-kube-api-access-9mcmw\") pod \"b4ddd82f-b23b-4294-ac2f-16085266df62\" (UID: \"b4ddd82f-b23b-4294-ac2f-16085266df62\") " Sep 9 22:14:47.337356 kubelet[2811]: I0909 22:14:47.333211 2811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/b4ddd82f-b23b-4294-ac2f-16085266df62-bpf-maps\") pod \"b4ddd82f-b23b-4294-ac2f-16085266df62\" (UID: \"b4ddd82f-b23b-4294-ac2f-16085266df62\") " Sep 9 22:14:47.337356 kubelet[2811]: I0909 22:14:47.333264 2811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fnn9x\" (UniqueName: \"kubernetes.io/projected/3d4ca5c4-f14a-4a9d-9f31-92bc61c3ff7c-kube-api-access-fnn9x\") pod \"3d4ca5c4-f14a-4a9d-9f31-92bc61c3ff7c\" (UID: \"3d4ca5c4-f14a-4a9d-9f31-92bc61c3ff7c\") " Sep 9 22:14:47.337356 kubelet[2811]: I0909 22:14:47.333287 2811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/b4ddd82f-b23b-4294-ac2f-16085266df62-cni-path\") pod \"b4ddd82f-b23b-4294-ac2f-16085266df62\" (UID: \"b4ddd82f-b23b-4294-ac2f-16085266df62\") " Sep 9 22:14:47.337356 kubelet[2811]: I0909 22:14:47.333310 2811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/b4ddd82f-b23b-4294-ac2f-16085266df62-etc-cni-netd\") pod \"b4ddd82f-b23b-4294-ac2f-16085266df62\" (UID: \"b4ddd82f-b23b-4294-ac2f-16085266df62\") " Sep 9 22:14:47.337356 kubelet[2811]: I0909 22:14:47.333353 2811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/b4ddd82f-b23b-4294-ac2f-16085266df62-host-proc-sys-kernel\") pod \"b4ddd82f-b23b-4294-ac2f-16085266df62\" (UID: \"b4ddd82f-b23b-4294-ac2f-16085266df62\") " Sep 9 22:14:47.341155 kubelet[2811]: I0909 22:14:47.333375 2811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/b4ddd82f-b23b-4294-ac2f-16085266df62-hostproc\") pod \"b4ddd82f-b23b-4294-ac2f-16085266df62\" (UID: \"b4ddd82f-b23b-4294-ac2f-16085266df62\") " Sep 9 22:14:47.341155 kubelet[2811]: I0909 22:14:47.333441 2811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b4ddd82f-b23b-4294-ac2f-16085266df62-cni-path" (OuterVolumeSpecName: "cni-path") pod "b4ddd82f-b23b-4294-ac2f-16085266df62" (UID: "b4ddd82f-b23b-4294-ac2f-16085266df62"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 9 22:14:47.341155 kubelet[2811]: I0909 22:14:47.334264 2811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b4ddd82f-b23b-4294-ac2f-16085266df62-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "b4ddd82f-b23b-4294-ac2f-16085266df62" (UID: "b4ddd82f-b23b-4294-ac2f-16085266df62"). InnerVolumeSpecName "bpf-maps". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 9 22:14:47.341155 kubelet[2811]: I0909 22:14:47.334297 2811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b4ddd82f-b23b-4294-ac2f-16085266df62-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "b4ddd82f-b23b-4294-ac2f-16085266df62" (UID: "b4ddd82f-b23b-4294-ac2f-16085266df62"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 9 22:14:47.341155 kubelet[2811]: I0909 22:14:47.334338 2811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b4ddd82f-b23b-4294-ac2f-16085266df62-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "b4ddd82f-b23b-4294-ac2f-16085266df62" (UID: "b4ddd82f-b23b-4294-ac2f-16085266df62"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 9 22:14:47.341633 kubelet[2811]: I0909 22:14:47.334358 2811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b4ddd82f-b23b-4294-ac2f-16085266df62-hostproc" (OuterVolumeSpecName: "hostproc") pod "b4ddd82f-b23b-4294-ac2f-16085266df62" (UID: "b4ddd82f-b23b-4294-ac2f-16085266df62"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 9 22:14:47.341633 kubelet[2811]: I0909 22:14:47.333417 2811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b4ddd82f-b23b-4294-ac2f-16085266df62-lib-modules\") pod \"b4ddd82f-b23b-4294-ac2f-16085266df62\" (UID: \"b4ddd82f-b23b-4294-ac2f-16085266df62\") " Sep 9 22:14:47.341633 kubelet[2811]: I0909 22:14:47.334410 2811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/b4ddd82f-b23b-4294-ac2f-16085266df62-clustermesh-secrets\") pod \"b4ddd82f-b23b-4294-ac2f-16085266df62\" (UID: \"b4ddd82f-b23b-4294-ac2f-16085266df62\") " Sep 9 22:14:47.342148 kubelet[2811]: I0909 22:14:47.342012 2811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/b4ddd82f-b23b-4294-ac2f-16085266df62-cilium-cgroup\") pod \"b4ddd82f-b23b-4294-ac2f-16085266df62\" (UID: \"b4ddd82f-b23b-4294-ac2f-16085266df62\") " Sep 9 22:14:47.342148 kubelet[2811]: I0909 22:14:47.342073 2811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/b4ddd82f-b23b-4294-ac2f-16085266df62-cilium-config-path\") pod \"b4ddd82f-b23b-4294-ac2f-16085266df62\" (UID: \"b4ddd82f-b23b-4294-ac2f-16085266df62\") " Sep 9 22:14:47.342148 kubelet[2811]: I0909 22:14:47.342096 2811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/3d4ca5c4-f14a-4a9d-9f31-92bc61c3ff7c-cilium-config-path\") pod \"3d4ca5c4-f14a-4a9d-9f31-92bc61c3ff7c\" (UID: \"3d4ca5c4-f14a-4a9d-9f31-92bc61c3ff7c\") " Sep 9 22:14:47.342148 kubelet[2811]: I0909 22:14:47.342118 2811 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/b4ddd82f-b23b-4294-ac2f-16085266df62-hubble-tls\") pod \"b4ddd82f-b23b-4294-ac2f-16085266df62\" (UID: \"b4ddd82f-b23b-4294-ac2f-16085266df62\") " Sep 9 22:14:47.342317 kubelet[2811]: I0909 22:14:47.342302 2811 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/b4ddd82f-b23b-4294-ac2f-16085266df62-host-proc-sys-net\") pod \"b4ddd82f-b23b-4294-ac2f-16085266df62\" (UID: \"b4ddd82f-b23b-4294-ac2f-16085266df62\") " Sep 9 22:14:47.342466 kubelet[2811]: I0909 22:14:47.342452 2811 reconciler_common.go:293] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/b4ddd82f-b23b-4294-ac2f-16085266df62-hostproc\") on node \"localhost\" DevicePath \"\"" Sep 9 22:14:47.342530 kubelet[2811]: I0909 22:14:47.342518 2811 reconciler_common.go:293] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/b4ddd82f-b23b-4294-ac2f-16085266df62-xtables-lock\") on node \"localhost\" DevicePath \"\"" Sep 9 22:14:47.345653 kubelet[2811]: I0909 22:14:47.345453 2811 reconciler_common.go:293] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/b4ddd82f-b23b-4294-ac2f-16085266df62-cilium-run\") on node \"localhost\" DevicePath \"\"" Sep 9 22:14:47.345653 kubelet[2811]: I0909 22:14:47.345477 2811 reconciler_common.go:293] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/b4ddd82f-b23b-4294-ac2f-16085266df62-cni-path\") on node \"localhost\" DevicePath \"\"" Sep 9 22:14:47.345653 kubelet[2811]: I0909 22:14:47.345489 2811 reconciler_common.go:293] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/b4ddd82f-b23b-4294-ac2f-16085266df62-bpf-maps\") on node \"localhost\" DevicePath \"\"" Sep 9 22:14:47.345653 kubelet[2811]: I0909 22:14:47.345500 2811 reconciler_common.go:293] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/b4ddd82f-b23b-4294-ac2f-16085266df62-etc-cni-netd\") on node \"localhost\" DevicePath \"\"" Sep 9 22:14:47.345653 kubelet[2811]: I0909 22:14:47.345512 2811 reconciler_common.go:293] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/b4ddd82f-b23b-4294-ac2f-16085266df62-host-proc-sys-kernel\") on node \"localhost\" DevicePath \"\"" Sep 9 22:14:47.345653 kubelet[2811]: I0909 22:14:47.334452 2811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b4ddd82f-b23b-4294-ac2f-16085266df62-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "b4ddd82f-b23b-4294-ac2f-16085266df62" (UID: "b4ddd82f-b23b-4294-ac2f-16085266df62"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 9 22:14:47.345653 kubelet[2811]: I0909 22:14:47.343859 2811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b4ddd82f-b23b-4294-ac2f-16085266df62-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "b4ddd82f-b23b-4294-ac2f-16085266df62" (UID: "b4ddd82f-b23b-4294-ac2f-16085266df62"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 9 22:14:47.346386 kubelet[2811]: I0909 22:14:47.343890 2811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b4ddd82f-b23b-4294-ac2f-16085266df62-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "b4ddd82f-b23b-4294-ac2f-16085266df62" (UID: "b4ddd82f-b23b-4294-ac2f-16085266df62"). InnerVolumeSpecName "cilium-cgroup". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 9 22:14:47.356178 kubelet[2811]: I0909 22:14:47.355625 2811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b4ddd82f-b23b-4294-ac2f-16085266df62-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "b4ddd82f-b23b-4294-ac2f-16085266df62" (UID: "b4ddd82f-b23b-4294-ac2f-16085266df62"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Sep 9 22:14:47.361324 kubelet[2811]: I0909 22:14:47.361140 2811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b4ddd82f-b23b-4294-ac2f-16085266df62-kube-api-access-9mcmw" (OuterVolumeSpecName: "kube-api-access-9mcmw") pod "b4ddd82f-b23b-4294-ac2f-16085266df62" (UID: "b4ddd82f-b23b-4294-ac2f-16085266df62"). InnerVolumeSpecName "kube-api-access-9mcmw". PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 9 22:14:47.361876 kubelet[2811]: I0909 22:14:47.361692 2811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3d4ca5c4-f14a-4a9d-9f31-92bc61c3ff7c-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "3d4ca5c4-f14a-4a9d-9f31-92bc61c3ff7c" (UID: "3d4ca5c4-f14a-4a9d-9f31-92bc61c3ff7c"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Sep 9 22:14:47.363901 kubelet[2811]: I0909 22:14:47.363841 2811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3d4ca5c4-f14a-4a9d-9f31-92bc61c3ff7c-kube-api-access-fnn9x" (OuterVolumeSpecName: "kube-api-access-fnn9x") pod "3d4ca5c4-f14a-4a9d-9f31-92bc61c3ff7c" (UID: "3d4ca5c4-f14a-4a9d-9f31-92bc61c3ff7c"). InnerVolumeSpecName "kube-api-access-fnn9x". PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 9 22:14:47.366016 kubelet[2811]: I0909 22:14:47.365168 2811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b4ddd82f-b23b-4294-ac2f-16085266df62-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "b4ddd82f-b23b-4294-ac2f-16085266df62" (UID: "b4ddd82f-b23b-4294-ac2f-16085266df62"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 9 22:14:47.367185 kubelet[2811]: I0909 22:14:47.367109 2811 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b4ddd82f-b23b-4294-ac2f-16085266df62-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "b4ddd82f-b23b-4294-ac2f-16085266df62" (UID: "b4ddd82f-b23b-4294-ac2f-16085266df62"). InnerVolumeSpecName "hubble-tls". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 9 22:14:47.451236 kubelet[2811]: I0909 22:14:47.448880 2811 reconciler_common.go:293] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/b4ddd82f-b23b-4294-ac2f-16085266df62-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Sep 9 22:14:47.451236 kubelet[2811]: I0909 22:14:47.450513 2811 reconciler_common.go:293] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/b4ddd82f-b23b-4294-ac2f-16085266df62-hubble-tls\") on node \"localhost\" DevicePath \"\"" Sep 9 22:14:47.452098 kubelet[2811]: I0909 22:14:47.451694 2811 reconciler_common.go:293] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/b4ddd82f-b23b-4294-ac2f-16085266df62-host-proc-sys-net\") on node \"localhost\" DevicePath \"\"" Sep 9 22:14:47.452098 kubelet[2811]: I0909 22:14:47.451826 2811 reconciler_common.go:293] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/3d4ca5c4-f14a-4a9d-9f31-92bc61c3ff7c-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Sep 9 22:14:47.452098 kubelet[2811]: I0909 22:14:47.452002 2811 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9mcmw\" (UniqueName: \"kubernetes.io/projected/b4ddd82f-b23b-4294-ac2f-16085266df62-kube-api-access-9mcmw\") on node \"localhost\" DevicePath \"\"" Sep 9 22:14:47.452098 kubelet[2811]: I0909 22:14:47.452022 2811 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fnn9x\" (UniqueName: \"kubernetes.io/projected/3d4ca5c4-f14a-4a9d-9f31-92bc61c3ff7c-kube-api-access-fnn9x\") on node \"localhost\" DevicePath \"\"" Sep 9 22:14:47.452614 kubelet[2811]: I0909 22:14:47.452194 2811 reconciler_common.go:293] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b4ddd82f-b23b-4294-ac2f-16085266df62-lib-modules\") on node \"localhost\" DevicePath \"\"" Sep 9 22:14:47.452614 kubelet[2811]: I0909 22:14:47.452214 2811 reconciler_common.go:293] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/b4ddd82f-b23b-4294-ac2f-16085266df62-clustermesh-secrets\") on node \"localhost\" DevicePath \"\"" Sep 9 22:14:47.452614 kubelet[2811]: I0909 22:14:47.452226 2811 reconciler_common.go:293] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/b4ddd82f-b23b-4294-ac2f-16085266df62-cilium-cgroup\") on node \"localhost\" DevicePath \"\"" Sep 9 22:14:47.753433 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-9efcfb911708b17146bbb5f84f46a861f7ea072204a4bfe5bdeebaad0ed6c788-shm.mount: Deactivated successfully. Sep 9 22:14:47.757790 systemd[1]: var-lib-kubelet-pods-3d4ca5c4\x2df14a\x2d4a9d\x2d9f31\x2d92bc61c3ff7c-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dfnn9x.mount: Deactivated successfully. Sep 9 22:14:47.757957 systemd[1]: var-lib-kubelet-pods-b4ddd82f\x2db23b\x2d4294\x2dac2f\x2d16085266df62-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d9mcmw.mount: Deactivated successfully. Sep 9 22:14:47.758083 systemd[1]: var-lib-kubelet-pods-b4ddd82f\x2db23b\x2d4294\x2dac2f\x2d16085266df62-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Sep 9 22:14:47.758215 systemd[1]: var-lib-kubelet-pods-b4ddd82f\x2db23b\x2d4294\x2dac2f\x2d16085266df62-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. 
Sep 9 22:14:48.009795 kubelet[2811]: I0909 22:14:48.008240 2811 scope.go:117] "RemoveContainer" containerID="7cc16a80c5d358bd91c85316de5f04ffb55863f6b7d0e6ed3a1e5b5660df800a" Sep 9 22:14:48.063624 containerd[1584]: time="2025-09-09T22:14:48.061053112Z" level=info msg="RemoveContainer for \"7cc16a80c5d358bd91c85316de5f04ffb55863f6b7d0e6ed3a1e5b5660df800a\"" Sep 9 22:14:48.076193 systemd[1]: Removed slice kubepods-besteffort-pod3d4ca5c4_f14a_4a9d_9f31_92bc61c3ff7c.slice - libcontainer container kubepods-besteffort-pod3d4ca5c4_f14a_4a9d_9f31_92bc61c3ff7c.slice. Sep 9 22:14:48.111693 systemd[1]: Removed slice kubepods-burstable-podb4ddd82f_b23b_4294_ac2f_16085266df62.slice - libcontainer container kubepods-burstable-podb4ddd82f_b23b_4294_ac2f_16085266df62.slice. Sep 9 22:14:48.113087 systemd[1]: kubepods-burstable-podb4ddd82f_b23b_4294_ac2f_16085266df62.slice: Consumed 10.387s CPU time, 143.6M memory peak, 616K read from disk, 13.3M written to disk. Sep 9 22:14:48.191318 containerd[1584]: time="2025-09-09T22:14:48.190684371Z" level=info msg="RemoveContainer for \"7cc16a80c5d358bd91c85316de5f04ffb55863f6b7d0e6ed3a1e5b5660df800a\" returns successfully" Sep 9 22:14:48.194759 kubelet[2811]: I0909 22:14:48.191963 2811 scope.go:117] "RemoveContainer" containerID="8097de9c4cc34cfb66b4114930c72c80175fe6be680299c8c1fe1adf1d1ba029" Sep 9 22:14:48.207902 containerd[1584]: time="2025-09-09T22:14:48.204898748Z" level=info msg="RemoveContainer for \"8097de9c4cc34cfb66b4114930c72c80175fe6be680299c8c1fe1adf1d1ba029\"" Sep 9 22:14:48.265523 containerd[1584]: time="2025-09-09T22:14:48.264786646Z" level=info msg="RemoveContainer for \"8097de9c4cc34cfb66b4114930c72c80175fe6be680299c8c1fe1adf1d1ba029\" returns successfully" Sep 9 22:14:48.265694 kubelet[2811]: I0909 22:14:48.265130 2811 scope.go:117] "RemoveContainer" containerID="765a19e114d2722452abd71d0c6a6232c6d0b21c714a6f78be214a7f0f454348" Sep 9 22:14:48.276848 containerd[1584]: time="2025-09-09T22:14:48.276686853Z" level=info msg="RemoveContainer for \"765a19e114d2722452abd71d0c6a6232c6d0b21c714a6f78be214a7f0f454348\"" Sep 9 22:14:48.332231 containerd[1584]: time="2025-09-09T22:14:48.332110527Z" level=info msg="RemoveContainer for \"765a19e114d2722452abd71d0c6a6232c6d0b21c714a6f78be214a7f0f454348\" returns successfully" Sep 9 22:14:48.344418 kubelet[2811]: I0909 22:14:48.344339 2811 scope.go:117] "RemoveContainer" containerID="e38eac7fe417eac831270f7da53877ed15237ac46286278c209c3f39482ce68d" Sep 9 22:14:48.351829 containerd[1584]: time="2025-09-09T22:14:48.350215541Z" level=info msg="RemoveContainer for \"e38eac7fe417eac831270f7da53877ed15237ac46286278c209c3f39482ce68d\"" Sep 9 22:14:48.445897 sshd[4658]: Connection closed by 10.0.0.1 port 41086 Sep 9 22:14:48.445068 sshd-session[4655]: pam_unix(sshd:session): session closed for user core Sep 9 22:14:48.479248 containerd[1584]: time="2025-09-09T22:14:48.479096519Z" level=info msg="RemoveContainer for \"e38eac7fe417eac831270f7da53877ed15237ac46286278c209c3f39482ce68d\" returns successfully" Sep 9 22:14:48.480312 kubelet[2811]: I0909 22:14:48.480284 2811 scope.go:117] "RemoveContainer" containerID="438058564076645e3408feb3bfc3fe2f63a4006a922be77756002c2f38e665c2" Sep 9 22:14:48.502957 containerd[1584]: time="2025-09-09T22:14:48.498158420Z" level=info msg="RemoveContainer for \"438058564076645e3408feb3bfc3fe2f63a4006a922be77756002c2f38e665c2\"" Sep 9 22:14:48.518874 systemd[1]: sshd@29-10.0.0.117:22-10.0.0.1:41086.service: Deactivated successfully. 
Sep 9 22:14:48.534983 systemd[1]: session-29.scope: Deactivated successfully. Sep 9 22:14:48.549477 systemd-logind[1563]: Session 29 logged out. Waiting for processes to exit. Sep 9 22:14:48.555384 systemd[1]: Started sshd@30-10.0.0.117:22-10.0.0.1:41092.service - OpenSSH per-connection server daemon (10.0.0.1:41092). Sep 9 22:14:48.558315 systemd-logind[1563]: Removed session 29. Sep 9 22:14:48.620596 containerd[1584]: time="2025-09-09T22:14:48.620474935Z" level=info msg="RemoveContainer for \"438058564076645e3408feb3bfc3fe2f63a4006a922be77756002c2f38e665c2\" returns successfully" Sep 9 22:14:48.621393 kubelet[2811]: I0909 22:14:48.621103 2811 scope.go:117] "RemoveContainer" containerID="a95f62b1fc09b9f11281220048a4f56429dd14f6ff1bdba42395b709f4bc590d" Sep 9 22:14:48.623315 containerd[1584]: time="2025-09-09T22:14:48.623285570Z" level=info msg="RemoveContainer for \"a95f62b1fc09b9f11281220048a4f56429dd14f6ff1bdba42395b709f4bc590d\"" Sep 9 22:14:48.676821 sshd[4804]: Accepted publickey for core from 10.0.0.1 port 41092 ssh2: RSA SHA256:A2CJI2QL6ueQzwzJUDumHRmawTN/BqpJNEZzUqxCWKo Sep 9 22:14:48.679794 sshd-session[4804]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 22:14:48.705900 systemd-logind[1563]: New session 30 of user core. Sep 9 22:14:48.720118 systemd[1]: Started session-30.scope - Session 30 of User core. Sep 9 22:14:48.786771 containerd[1584]: time="2025-09-09T22:14:48.786602068Z" level=info msg="RemoveContainer for \"a95f62b1fc09b9f11281220048a4f56429dd14f6ff1bdba42395b709f4bc590d\" returns successfully" Sep 9 22:14:48.789626 kubelet[2811]: I0909 22:14:48.788630 2811 scope.go:117] "RemoveContainer" containerID="8097de9c4cc34cfb66b4114930c72c80175fe6be680299c8c1fe1adf1d1ba029" Sep 9 22:14:48.789786 containerd[1584]: time="2025-09-09T22:14:48.789048769Z" level=error msg="ContainerStatus for \"8097de9c4cc34cfb66b4114930c72c80175fe6be680299c8c1fe1adf1d1ba029\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"8097de9c4cc34cfb66b4114930c72c80175fe6be680299c8c1fe1adf1d1ba029\": not found" Sep 9 22:14:48.806468 kubelet[2811]: E0909 22:14:48.806288 2811 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"8097de9c4cc34cfb66b4114930c72c80175fe6be680299c8c1fe1adf1d1ba029\": not found" containerID="8097de9c4cc34cfb66b4114930c72c80175fe6be680299c8c1fe1adf1d1ba029" Sep 9 22:14:48.824942 kubelet[2811]: I0909 22:14:48.806387 2811 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"8097de9c4cc34cfb66b4114930c72c80175fe6be680299c8c1fe1adf1d1ba029"} err="failed to get container status \"8097de9c4cc34cfb66b4114930c72c80175fe6be680299c8c1fe1adf1d1ba029\": rpc error: code = NotFound desc = an error occurred when try to find container \"8097de9c4cc34cfb66b4114930c72c80175fe6be680299c8c1fe1adf1d1ba029\": not found" Sep 9 22:14:48.825186 kubelet[2811]: I0909 22:14:48.825162 2811 scope.go:117] "RemoveContainer" containerID="765a19e114d2722452abd71d0c6a6232c6d0b21c714a6f78be214a7f0f454348" Sep 9 22:14:48.826035 containerd[1584]: time="2025-09-09T22:14:48.825927843Z" level=error msg="ContainerStatus for \"765a19e114d2722452abd71d0c6a6232c6d0b21c714a6f78be214a7f0f454348\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"765a19e114d2722452abd71d0c6a6232c6d0b21c714a6f78be214a7f0f454348\": not found" Sep 9 22:14:48.827941 kubelet[2811]: E0909 
22:14:48.827907 2811 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"765a19e114d2722452abd71d0c6a6232c6d0b21c714a6f78be214a7f0f454348\": not found" containerID="765a19e114d2722452abd71d0c6a6232c6d0b21c714a6f78be214a7f0f454348" Sep 9 22:14:48.828199 kubelet[2811]: I0909 22:14:48.828054 2811 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"765a19e114d2722452abd71d0c6a6232c6d0b21c714a6f78be214a7f0f454348"} err="failed to get container status \"765a19e114d2722452abd71d0c6a6232c6d0b21c714a6f78be214a7f0f454348\": rpc error: code = NotFound desc = an error occurred when try to find container \"765a19e114d2722452abd71d0c6a6232c6d0b21c714a6f78be214a7f0f454348\": not found" Sep 9 22:14:48.828199 kubelet[2811]: I0909 22:14:48.828096 2811 scope.go:117] "RemoveContainer" containerID="e38eac7fe417eac831270f7da53877ed15237ac46286278c209c3f39482ce68d" Sep 9 22:14:48.828457 containerd[1584]: time="2025-09-09T22:14:48.828421452Z" level=error msg="ContainerStatus for \"e38eac7fe417eac831270f7da53877ed15237ac46286278c209c3f39482ce68d\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"e38eac7fe417eac831270f7da53877ed15237ac46286278c209c3f39482ce68d\": not found" Sep 9 22:14:48.828787 kubelet[2811]: E0909 22:14:48.828686 2811 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"e38eac7fe417eac831270f7da53877ed15237ac46286278c209c3f39482ce68d\": not found" containerID="e38eac7fe417eac831270f7da53877ed15237ac46286278c209c3f39482ce68d" Sep 9 22:14:48.828939 kubelet[2811]: I0909 22:14:48.828868 2811 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"e38eac7fe417eac831270f7da53877ed15237ac46286278c209c3f39482ce68d"} err="failed to get container status \"e38eac7fe417eac831270f7da53877ed15237ac46286278c209c3f39482ce68d\": rpc error: code = NotFound desc = an error occurred when try to find container \"e38eac7fe417eac831270f7da53877ed15237ac46286278c209c3f39482ce68d\": not found" Sep 9 22:14:48.828939 kubelet[2811]: I0909 22:14:48.828901 2811 scope.go:117] "RemoveContainer" containerID="438058564076645e3408feb3bfc3fe2f63a4006a922be77756002c2f38e665c2" Sep 9 22:14:48.829375 containerd[1584]: time="2025-09-09T22:14:48.829300475Z" level=error msg="ContainerStatus for \"438058564076645e3408feb3bfc3fe2f63a4006a922be77756002c2f38e665c2\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"438058564076645e3408feb3bfc3fe2f63a4006a922be77756002c2f38e665c2\": not found" Sep 9 22:14:48.829511 kubelet[2811]: E0909 22:14:48.829489 2811 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"438058564076645e3408feb3bfc3fe2f63a4006a922be77756002c2f38e665c2\": not found" containerID="438058564076645e3408feb3bfc3fe2f63a4006a922be77756002c2f38e665c2" Sep 9 22:14:48.829679 kubelet[2811]: I0909 22:14:48.829625 2811 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"438058564076645e3408feb3bfc3fe2f63a4006a922be77756002c2f38e665c2"} err="failed to get container status \"438058564076645e3408feb3bfc3fe2f63a4006a922be77756002c2f38e665c2\": rpc error: code = NotFound desc = an error occurred when try to find container 
\"438058564076645e3408feb3bfc3fe2f63a4006a922be77756002c2f38e665c2\": not found" Sep 9 22:14:48.829849 kubelet[2811]: I0909 22:14:48.829780 2811 scope.go:117] "RemoveContainer" containerID="a95f62b1fc09b9f11281220048a4f56429dd14f6ff1bdba42395b709f4bc590d" Sep 9 22:14:48.830119 containerd[1584]: time="2025-09-09T22:14:48.830065524Z" level=error msg="ContainerStatus for \"a95f62b1fc09b9f11281220048a4f56429dd14f6ff1bdba42395b709f4bc590d\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"a95f62b1fc09b9f11281220048a4f56429dd14f6ff1bdba42395b709f4bc590d\": not found" Sep 9 22:14:48.830381 kubelet[2811]: E0909 22:14:48.830317 2811 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"a95f62b1fc09b9f11281220048a4f56429dd14f6ff1bdba42395b709f4bc590d\": not found" containerID="a95f62b1fc09b9f11281220048a4f56429dd14f6ff1bdba42395b709f4bc590d" Sep 9 22:14:48.830381 kubelet[2811]: I0909 22:14:48.830338 2811 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"a95f62b1fc09b9f11281220048a4f56429dd14f6ff1bdba42395b709f4bc590d"} err="failed to get container status \"a95f62b1fc09b9f11281220048a4f56429dd14f6ff1bdba42395b709f4bc590d\": rpc error: code = NotFound desc = an error occurred when try to find container \"a95f62b1fc09b9f11281220048a4f56429dd14f6ff1bdba42395b709f4bc590d\": not found" Sep 9 22:14:48.878046 kubelet[2811]: I0909 22:14:48.875461 2811 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3d4ca5c4-f14a-4a9d-9f31-92bc61c3ff7c" path="/var/lib/kubelet/pods/3d4ca5c4-f14a-4a9d-9f31-92bc61c3ff7c/volumes" Sep 9 22:14:48.883578 kubelet[2811]: I0909 22:14:48.881937 2811 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b4ddd82f-b23b-4294-ac2f-16085266df62" path="/var/lib/kubelet/pods/b4ddd82f-b23b-4294-ac2f-16085266df62/volumes" Sep 9 22:14:50.159388 kubelet[2811]: E0909 22:14:50.159322 2811 kubelet.go:2902] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Sep 9 22:14:50.282398 sshd[4807]: Connection closed by 10.0.0.1 port 41092 Sep 9 22:14:50.283533 sshd-session[4804]: pam_unix(sshd:session): session closed for user core Sep 9 22:14:50.325224 systemd[1]: Started sshd@31-10.0.0.117:22-10.0.0.1:35210.service - OpenSSH per-connection server daemon (10.0.0.1:35210). Sep 9 22:14:50.326095 systemd[1]: sshd@30-10.0.0.117:22-10.0.0.1:41092.service: Deactivated successfully. Sep 9 22:14:50.337374 systemd[1]: session-30.scope: Deactivated successfully. Sep 9 22:14:50.352123 systemd-logind[1563]: Session 30 logged out. Waiting for processes to exit. Sep 9 22:14:50.363257 systemd-logind[1563]: Removed session 30. 
Sep 9 22:14:50.410018 kubelet[2811]: E0909 22:14:50.408811 2811 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="b4ddd82f-b23b-4294-ac2f-16085266df62" containerName="mount-cgroup" Sep 9 22:14:50.410018 kubelet[2811]: E0909 22:14:50.408972 2811 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="3d4ca5c4-f14a-4a9d-9f31-92bc61c3ff7c" containerName="cilium-operator" Sep 9 22:14:50.410018 kubelet[2811]: E0909 22:14:50.408981 2811 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="b4ddd82f-b23b-4294-ac2f-16085266df62" containerName="mount-bpf-fs" Sep 9 22:14:50.410018 kubelet[2811]: E0909 22:14:50.408991 2811 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="b4ddd82f-b23b-4294-ac2f-16085266df62" containerName="apply-sysctl-overwrites" Sep 9 22:14:50.410018 kubelet[2811]: E0909 22:14:50.408999 2811 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="b4ddd82f-b23b-4294-ac2f-16085266df62" containerName="clean-cilium-state" Sep 9 22:14:50.410018 kubelet[2811]: E0909 22:14:50.409006 2811 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="b4ddd82f-b23b-4294-ac2f-16085266df62" containerName="cilium-agent" Sep 9 22:14:50.410018 kubelet[2811]: I0909 22:14:50.409043 2811 memory_manager.go:354] "RemoveStaleState removing state" podUID="3d4ca5c4-f14a-4a9d-9f31-92bc61c3ff7c" containerName="cilium-operator" Sep 9 22:14:50.410018 kubelet[2811]: I0909 22:14:50.409052 2811 memory_manager.go:354] "RemoveStaleState removing state" podUID="b4ddd82f-b23b-4294-ac2f-16085266df62" containerName="cilium-agent" Sep 9 22:14:50.426148 systemd[1]: Created slice kubepods-burstable-podbf88d6d6_fc6d_4512_84c4_c63be7835d53.slice - libcontainer container kubepods-burstable-podbf88d6d6_fc6d_4512_84c4_c63be7835d53.slice. 
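The "Created slice kubepods-burstable-pod..." entry above shows the systemd cgroup slice the kubelet requested for the new burstable pod: the pod UID (bf88d6d6-fc6d-4512-84c4-c63be7835d53) with dashes escaped to underscores, wrapped in the kubepods-burstable-pod<uid>.slice pattern. A tiny illustrative helper reproducing that name, based only on what is visible in the log, not on kubelet source:

// slicename.go - derive the systemd slice name created above for the new
// burstable pod. Illustrative sketch of the naming convention, not kubelet code.
package main

import (
	"fmt"
	"strings"
)

// burstablePodSlice maps a pod UID to kubepods-burstable-pod<uid>.slice,
// escaping dashes to underscores as systemd unit names require.
func burstablePodSlice(podUID string) string {
	escaped := strings.ReplaceAll(podUID, "-", "_")
	return "kubepods-burstable-pod" + escaped + ".slice"
}

func main() {
	// UID of the cilium-jsrlq pod from the log above.
	fmt.Println(burstablePodSlice("bf88d6d6-fc6d-4512-84c4-c63be7835d53"))
	// Output: kubepods-burstable-podbf88d6d6_fc6d_4512_84c4_c63be7835d53.slice
}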
Sep 9 22:14:50.449957 kubelet[2811]: I0909 22:14:50.449884 2811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/bf88d6d6-fc6d-4512-84c4-c63be7835d53-cilium-cgroup\") pod \"cilium-jsrlq\" (UID: \"bf88d6d6-fc6d-4512-84c4-c63be7835d53\") " pod="kube-system/cilium-jsrlq" Sep 9 22:14:50.450836 kubelet[2811]: I0909 22:14:50.450767 2811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/bf88d6d6-fc6d-4512-84c4-c63be7835d53-hostproc\") pod \"cilium-jsrlq\" (UID: \"bf88d6d6-fc6d-4512-84c4-c63be7835d53\") " pod="kube-system/cilium-jsrlq" Sep 9 22:14:50.451158 kubelet[2811]: I0909 22:14:50.451133 2811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/bf88d6d6-fc6d-4512-84c4-c63be7835d53-cni-path\") pod \"cilium-jsrlq\" (UID: \"bf88d6d6-fc6d-4512-84c4-c63be7835d53\") " pod="kube-system/cilium-jsrlq" Sep 9 22:14:50.459752 kubelet[2811]: I0909 22:14:50.459282 2811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/bf88d6d6-fc6d-4512-84c4-c63be7835d53-cilium-config-path\") pod \"cilium-jsrlq\" (UID: \"bf88d6d6-fc6d-4512-84c4-c63be7835d53\") " pod="kube-system/cilium-jsrlq" Sep 9 22:14:50.459752 kubelet[2811]: I0909 22:14:50.459336 2811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/bf88d6d6-fc6d-4512-84c4-c63be7835d53-cilium-ipsec-secrets\") pod \"cilium-jsrlq\" (UID: \"bf88d6d6-fc6d-4512-84c4-c63be7835d53\") " pod="kube-system/cilium-jsrlq" Sep 9 22:14:50.459752 kubelet[2811]: I0909 22:14:50.459362 2811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/bf88d6d6-fc6d-4512-84c4-c63be7835d53-cilium-run\") pod \"cilium-jsrlq\" (UID: \"bf88d6d6-fc6d-4512-84c4-c63be7835d53\") " pod="kube-system/cilium-jsrlq" Sep 9 22:14:50.459752 kubelet[2811]: I0909 22:14:50.459384 2811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/bf88d6d6-fc6d-4512-84c4-c63be7835d53-host-proc-sys-kernel\") pod \"cilium-jsrlq\" (UID: \"bf88d6d6-fc6d-4512-84c4-c63be7835d53\") " pod="kube-system/cilium-jsrlq" Sep 9 22:14:50.459752 kubelet[2811]: I0909 22:14:50.459406 2811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/bf88d6d6-fc6d-4512-84c4-c63be7835d53-xtables-lock\") pod \"cilium-jsrlq\" (UID: \"bf88d6d6-fc6d-4512-84c4-c63be7835d53\") " pod="kube-system/cilium-jsrlq" Sep 9 22:14:50.459752 kubelet[2811]: I0909 22:14:50.459434 2811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/bf88d6d6-fc6d-4512-84c4-c63be7835d53-lib-modules\") pod \"cilium-jsrlq\" (UID: \"bf88d6d6-fc6d-4512-84c4-c63be7835d53\") " pod="kube-system/cilium-jsrlq" Sep 9 22:14:50.460115 kubelet[2811]: I0909 22:14:50.459454 2811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: 
\"kubernetes.io/secret/bf88d6d6-fc6d-4512-84c4-c63be7835d53-clustermesh-secrets\") pod \"cilium-jsrlq\" (UID: \"bf88d6d6-fc6d-4512-84c4-c63be7835d53\") " pod="kube-system/cilium-jsrlq" Sep 9 22:14:50.460115 kubelet[2811]: I0909 22:14:50.459475 2811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7xqxc\" (UniqueName: \"kubernetes.io/projected/bf88d6d6-fc6d-4512-84c4-c63be7835d53-kube-api-access-7xqxc\") pod \"cilium-jsrlq\" (UID: \"bf88d6d6-fc6d-4512-84c4-c63be7835d53\") " pod="kube-system/cilium-jsrlq" Sep 9 22:14:50.460115 kubelet[2811]: I0909 22:14:50.459521 2811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/bf88d6d6-fc6d-4512-84c4-c63be7835d53-bpf-maps\") pod \"cilium-jsrlq\" (UID: \"bf88d6d6-fc6d-4512-84c4-c63be7835d53\") " pod="kube-system/cilium-jsrlq" Sep 9 22:14:50.460115 kubelet[2811]: I0909 22:14:50.459550 2811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/bf88d6d6-fc6d-4512-84c4-c63be7835d53-hubble-tls\") pod \"cilium-jsrlq\" (UID: \"bf88d6d6-fc6d-4512-84c4-c63be7835d53\") " pod="kube-system/cilium-jsrlq" Sep 9 22:14:50.460115 kubelet[2811]: I0909 22:14:50.459575 2811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/bf88d6d6-fc6d-4512-84c4-c63be7835d53-etc-cni-netd\") pod \"cilium-jsrlq\" (UID: \"bf88d6d6-fc6d-4512-84c4-c63be7835d53\") " pod="kube-system/cilium-jsrlq" Sep 9 22:14:50.460115 kubelet[2811]: I0909 22:14:50.459599 2811 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/bf88d6d6-fc6d-4512-84c4-c63be7835d53-host-proc-sys-net\") pod \"cilium-jsrlq\" (UID: \"bf88d6d6-fc6d-4512-84c4-c63be7835d53\") " pod="kube-system/cilium-jsrlq" Sep 9 22:14:50.684070 sshd[4816]: Accepted publickey for core from 10.0.0.1 port 35210 ssh2: RSA SHA256:A2CJI2QL6ueQzwzJUDumHRmawTN/BqpJNEZzUqxCWKo Sep 9 22:14:50.689820 sshd-session[4816]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 22:14:50.708622 systemd-logind[1563]: New session 31 of user core. Sep 9 22:14:50.733952 kubelet[2811]: E0909 22:14:50.733891 2811 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 22:14:50.741029 containerd[1584]: time="2025-09-09T22:14:50.740614392Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-jsrlq,Uid:bf88d6d6-fc6d-4512-84c4-c63be7835d53,Namespace:kube-system,Attempt:0,}" Sep 9 22:14:50.755233 systemd[1]: Started session-31.scope - Session 31 of User core. 
Sep 9 22:14:50.841969 sshd[4826]: Connection closed by 10.0.0.1 port 35210 Sep 9 22:14:50.847981 sshd-session[4816]: pam_unix(sshd:session): session closed for user core Sep 9 22:14:50.857557 containerd[1584]: time="2025-09-09T22:14:50.855671350Z" level=info msg="connecting to shim d06d1f8b9c159e535efc061b3bfff3a9f5684e1f6629d68d767cd5a1cacda56e" address="unix:///run/containerd/s/c27e8e88d4dc81b412497e3fb4549fd45b81ae1d1a0324be3d9514ef236f5e93" namespace=k8s.io protocol=ttrpc version=3 Sep 9 22:14:50.882381 systemd[1]: Started sshd@32-10.0.0.117:22-10.0.0.1:35224.service - OpenSSH per-connection server daemon (10.0.0.1:35224). Sep 9 22:14:50.890929 systemd[1]: sshd@31-10.0.0.117:22-10.0.0.1:35210.service: Deactivated successfully. Sep 9 22:14:50.895097 systemd[1]: session-31.scope: Deactivated successfully. Sep 9 22:14:50.900744 systemd-logind[1563]: Session 31 logged out. Waiting for processes to exit. Sep 9 22:14:50.904609 systemd-logind[1563]: Removed session 31. Sep 9 22:14:50.983608 systemd[1]: Started cri-containerd-d06d1f8b9c159e535efc061b3bfff3a9f5684e1f6629d68d767cd5a1cacda56e.scope - libcontainer container d06d1f8b9c159e535efc061b3bfff3a9f5684e1f6629d68d767cd5a1cacda56e. Sep 9 22:14:51.019350 sshd[4849]: Accepted publickey for core from 10.0.0.1 port 35224 ssh2: RSA SHA256:A2CJI2QL6ueQzwzJUDumHRmawTN/BqpJNEZzUqxCWKo Sep 9 22:14:51.020263 sshd-session[4849]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 22:14:51.036899 systemd-logind[1563]: New session 32 of user core. Sep 9 22:14:51.049958 systemd[1]: Started session-32.scope - Session 32 of User core. Sep 9 22:14:51.133783 containerd[1584]: time="2025-09-09T22:14:51.133594111Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-jsrlq,Uid:bf88d6d6-fc6d-4512-84c4-c63be7835d53,Namespace:kube-system,Attempt:0,} returns sandbox id \"d06d1f8b9c159e535efc061b3bfff3a9f5684e1f6629d68d767cd5a1cacda56e\"" Sep 9 22:14:51.137559 kubelet[2811]: E0909 22:14:51.135544 2811 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 22:14:51.142223 containerd[1584]: time="2025-09-09T22:14:51.140046491Z" level=info msg="CreateContainer within sandbox \"d06d1f8b9c159e535efc061b3bfff3a9f5684e1f6629d68d767cd5a1cacda56e\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Sep 9 22:14:51.458626 containerd[1584]: time="2025-09-09T22:14:51.457771565Z" level=info msg="Container 4b091da49305f47b9c3eb06f2788a2ba9ef2516d81f49d5aebe2e482b46ff764: CDI devices from CRI Config.CDIDevices: []" Sep 9 22:14:51.712973 containerd[1584]: time="2025-09-09T22:14:51.712504179Z" level=info msg="CreateContainer within sandbox \"d06d1f8b9c159e535efc061b3bfff3a9f5684e1f6629d68d767cd5a1cacda56e\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"4b091da49305f47b9c3eb06f2788a2ba9ef2516d81f49d5aebe2e482b46ff764\"" Sep 9 22:14:51.715063 containerd[1584]: time="2025-09-09T22:14:51.714226753Z" level=info msg="StartContainer for \"4b091da49305f47b9c3eb06f2788a2ba9ef2516d81f49d5aebe2e482b46ff764\"" Sep 9 22:14:51.720627 containerd[1584]: time="2025-09-09T22:14:51.716199488Z" level=info msg="connecting to shim 4b091da49305f47b9c3eb06f2788a2ba9ef2516d81f49d5aebe2e482b46ff764" address="unix:///run/containerd/s/c27e8e88d4dc81b412497e3fb4549fd45b81ae1d1a0324be3d9514ef236f5e93" protocol=ttrpc version=3 Sep 9 22:14:51.788413 systemd[1]: Started 
cri-containerd-4b091da49305f47b9c3eb06f2788a2ba9ef2516d81f49d5aebe2e482b46ff764.scope - libcontainer container 4b091da49305f47b9c3eb06f2788a2ba9ef2516d81f49d5aebe2e482b46ff764. Sep 9 22:14:51.872313 kubelet[2811]: E0909 22:14:51.865356 2811 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 22:14:51.899280 containerd[1584]: time="2025-09-09T22:14:51.899226163Z" level=info msg="StartContainer for \"4b091da49305f47b9c3eb06f2788a2ba9ef2516d81f49d5aebe2e482b46ff764\" returns successfully" Sep 9 22:14:51.921754 systemd[1]: cri-containerd-4b091da49305f47b9c3eb06f2788a2ba9ef2516d81f49d5aebe2e482b46ff764.scope: Deactivated successfully. Sep 9 22:14:51.933798 containerd[1584]: time="2025-09-09T22:14:51.933697525Z" level=info msg="received exit event container_id:\"4b091da49305f47b9c3eb06f2788a2ba9ef2516d81f49d5aebe2e482b46ff764\" id:\"4b091da49305f47b9c3eb06f2788a2ba9ef2516d81f49d5aebe2e482b46ff764\" pid:4900 exited_at:{seconds:1757456091 nanos:933144745}" Sep 9 22:14:51.934126 containerd[1584]: time="2025-09-09T22:14:51.933928010Z" level=info msg="TaskExit event in podsandbox handler container_id:\"4b091da49305f47b9c3eb06f2788a2ba9ef2516d81f49d5aebe2e482b46ff764\" id:\"4b091da49305f47b9c3eb06f2788a2ba9ef2516d81f49d5aebe2e482b46ff764\" pid:4900 exited_at:{seconds:1757456091 nanos:933144745}" Sep 9 22:14:51.974433 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-4b091da49305f47b9c3eb06f2788a2ba9ef2516d81f49d5aebe2e482b46ff764-rootfs.mount: Deactivated successfully. Sep 9 22:14:52.093872 kubelet[2811]: E0909 22:14:52.093812 2811 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 22:14:52.096157 containerd[1584]: time="2025-09-09T22:14:52.096111761Z" level=info msg="CreateContainer within sandbox \"d06d1f8b9c159e535efc061b3bfff3a9f5684e1f6629d68d767cd5a1cacda56e\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Sep 9 22:14:52.336605 containerd[1584]: time="2025-09-09T22:14:52.336526908Z" level=info msg="Container 72590ec4190675a6a4ceef5817c06473ca53c7a8cf2d09c993cde7fff789406e: CDI devices from CRI Config.CDIDevices: []" Sep 9 22:14:52.342007 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2381416227.mount: Deactivated successfully. Sep 9 22:14:52.348211 containerd[1584]: time="2025-09-09T22:14:52.347612774Z" level=info msg="CreateContainer within sandbox \"d06d1f8b9c159e535efc061b3bfff3a9f5684e1f6629d68d767cd5a1cacda56e\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"72590ec4190675a6a4ceef5817c06473ca53c7a8cf2d09c993cde7fff789406e\"" Sep 9 22:14:52.349020 containerd[1584]: time="2025-09-09T22:14:52.348980370Z" level=info msg="StartContainer for \"72590ec4190675a6a4ceef5817c06473ca53c7a8cf2d09c993cde7fff789406e\"" Sep 9 22:14:52.350105 containerd[1584]: time="2025-09-09T22:14:52.350054233Z" level=info msg="connecting to shim 72590ec4190675a6a4ceef5817c06473ca53c7a8cf2d09c993cde7fff789406e" address="unix:///run/containerd/s/c27e8e88d4dc81b412497e3fb4549fd45b81ae1d1a0324be3d9514ef236f5e93" protocol=ttrpc version=3 Sep 9 22:14:52.388042 systemd[1]: Started cri-containerd-72590ec4190675a6a4ceef5817c06473ca53c7a8cf2d09c993cde7fff789406e.scope - libcontainer container 72590ec4190675a6a4ceef5817c06473ca53c7a8cf2d09c993cde7fff789406e. 
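The "received exit event" and "TaskExit event in podsandbox handler" lines above are containerd's CRI plugin observing the mount-cgroup init container's task exiting, after which the scope is deactivated and the rootfs mount is cleaned up. A sketch of watching the same task lifecycle with the containerd Go client, assuming the default socket and the CRI plugin's "k8s.io" namespace (both assumptions about this host, though they match the shim addresses in the log).

// taskwait.go - wait for a container's task exit with the containerd client,
// the lifecycle reflected by the TaskExit entries above. Sketch only.
package main

import (
	"context"
	"log"

	"github.com/containerd/containerd"
	"github.com/containerd/containerd/namespaces"
)

func main() {
	client, err := containerd.New("/run/containerd/containerd.sock")
	if err != nil {
		log.Fatalf("connect to containerd: %v", err)
	}
	defer client.Close()

	ctx := namespaces.WithNamespace(context.Background(), "k8s.io")

	// Container ID of the mount-cgroup init container from the log above.
	container, err := client.LoadContainer(ctx, "4b091da49305f47b9c3eb06f2788a2ba9ef2516d81f49d5aebe2e482b46ff764")
	if err != nil {
		log.Fatalf("load container: %v", err)
	}

	task, err := container.Task(ctx, nil)
	if err != nil {
		log.Fatalf("load task: %v", err) // the task may already be reaped once the scope is gone
	}

	exitCh, err := task.Wait(ctx)
	if err != nil {
		log.Fatalf("wait: %v", err)
	}
	exit := <-exitCh
	log.Printf("task exited with code %d at %s", exit.ExitCode(), exit.ExitTime())
}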
Sep 9 22:14:52.428387 containerd[1584]: time="2025-09-09T22:14:52.428317740Z" level=info msg="StartContainer for \"72590ec4190675a6a4ceef5817c06473ca53c7a8cf2d09c993cde7fff789406e\" returns successfully" Sep 9 22:14:52.439089 systemd[1]: cri-containerd-72590ec4190675a6a4ceef5817c06473ca53c7a8cf2d09c993cde7fff789406e.scope: Deactivated successfully. Sep 9 22:14:52.440095 containerd[1584]: time="2025-09-09T22:14:52.439917324Z" level=info msg="received exit event container_id:\"72590ec4190675a6a4ceef5817c06473ca53c7a8cf2d09c993cde7fff789406e\" id:\"72590ec4190675a6a4ceef5817c06473ca53c7a8cf2d09c993cde7fff789406e\" pid:4944 exited_at:{seconds:1757456092 nanos:439513964}" Sep 9 22:14:52.440342 containerd[1584]: time="2025-09-09T22:14:52.440317307Z" level=info msg="TaskExit event in podsandbox handler container_id:\"72590ec4190675a6a4ceef5817c06473ca53c7a8cf2d09c993cde7fff789406e\" id:\"72590ec4190675a6a4ceef5817c06473ca53c7a8cf2d09c993cde7fff789406e\" pid:4944 exited_at:{seconds:1757456092 nanos:439513964}" Sep 9 22:14:53.104170 kubelet[2811]: E0909 22:14:53.104103 2811 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 22:14:53.109087 containerd[1584]: time="2025-09-09T22:14:53.107882671Z" level=info msg="CreateContainer within sandbox \"d06d1f8b9c159e535efc061b3bfff3a9f5684e1f6629d68d767cd5a1cacda56e\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Sep 9 22:14:53.159613 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4243101039.mount: Deactivated successfully. Sep 9 22:14:53.160313 containerd[1584]: time="2025-09-09T22:14:53.160188289Z" level=info msg="Container 08105ba4c672c8fccc8d2c2a39b86fe6a009f075ec4e5d130c3a66b899f7fdf1: CDI devices from CRI Config.CDIDevices: []" Sep 9 22:14:53.188661 containerd[1584]: time="2025-09-09T22:14:53.188565898Z" level=info msg="CreateContainer within sandbox \"d06d1f8b9c159e535efc061b3bfff3a9f5684e1f6629d68d767cd5a1cacda56e\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"08105ba4c672c8fccc8d2c2a39b86fe6a009f075ec4e5d130c3a66b899f7fdf1\"" Sep 9 22:14:53.191367 containerd[1584]: time="2025-09-09T22:14:53.190349689Z" level=info msg="StartContainer for \"08105ba4c672c8fccc8d2c2a39b86fe6a009f075ec4e5d130c3a66b899f7fdf1\"" Sep 9 22:14:53.192313 containerd[1584]: time="2025-09-09T22:14:53.192286310Z" level=info msg="connecting to shim 08105ba4c672c8fccc8d2c2a39b86fe6a009f075ec4e5d130c3a66b899f7fdf1" address="unix:///run/containerd/s/c27e8e88d4dc81b412497e3fb4549fd45b81ae1d1a0324be3d9514ef236f5e93" protocol=ttrpc version=3 Sep 9 22:14:53.232164 systemd[1]: Started cri-containerd-08105ba4c672c8fccc8d2c2a39b86fe6a009f075ec4e5d130c3a66b899f7fdf1.scope - libcontainer container 08105ba4c672c8fccc8d2c2a39b86fe6a009f075ec4e5d130c3a66b899f7fdf1. Sep 9 22:14:53.418371 systemd[1]: cri-containerd-08105ba4c672c8fccc8d2c2a39b86fe6a009f075ec4e5d130c3a66b899f7fdf1.scope: Deactivated successfully. 
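The recurring dns.go "Nameserver limits exceeded" warnings mean the node's resolver configuration lists more nameservers than the kubelet will pass into pods, so it truncates the list to the three shown (1.1.1.1 1.0.0.1 8.8.8.8). A small sketch performing the same check; the /etc/resolv.conf path and the limit of three are stated here as the conventional values, i.e. assumptions about this system rather than facts read from the log.

// nslimit.go - count nameserver entries in resolv.conf and report when more
// than three are present, mirroring the kubelet warning above. Sketch only.
package main

import (
	"bufio"
	"fmt"
	"log"
	"os"
	"strings"
)

func main() {
	f, err := os.Open("/etc/resolv.conf")
	if err != nil {
		log.Fatalf("open resolv.conf: %v", err)
	}
	defer f.Close()

	var nameservers []string
	scanner := bufio.NewScanner(f)
	for scanner.Scan() {
		fields := strings.Fields(scanner.Text())
		if len(fields) >= 2 && fields[0] == "nameserver" {
			nameservers = append(nameservers, fields[1])
		}
	}
	if err := scanner.Err(); err != nil {
		log.Fatalf("read resolv.conf: %v", err)
	}

	const limit = 3 // assumed limit, matching the three nameservers the kubelet applied
	if len(nameservers) > limit {
		fmt.Printf("nameserver limit exceeded: %d found, only %v would be applied\n",
			len(nameservers), nameservers[:limit])
		return
	}
	fmt.Printf("%d nameservers, within the limit\n", len(nameservers))
}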
Sep 9 22:14:53.420912 containerd[1584]: time="2025-09-09T22:14:53.420813702Z" level=info msg="TaskExit event in podsandbox handler container_id:\"08105ba4c672c8fccc8d2c2a39b86fe6a009f075ec4e5d130c3a66b899f7fdf1\" id:\"08105ba4c672c8fccc8d2c2a39b86fe6a009f075ec4e5d130c3a66b899f7fdf1\" pid:4987 exited_at:{seconds:1757456093 nanos:419253301}" Sep 9 22:14:53.424579 containerd[1584]: time="2025-09-09T22:14:53.423787016Z" level=info msg="received exit event container_id:\"08105ba4c672c8fccc8d2c2a39b86fe6a009f075ec4e5d130c3a66b899f7fdf1\" id:\"08105ba4c672c8fccc8d2c2a39b86fe6a009f075ec4e5d130c3a66b899f7fdf1\" pid:4987 exited_at:{seconds:1757456093 nanos:419253301}" Sep 9 22:14:53.433965 containerd[1584]: time="2025-09-09T22:14:53.433899617Z" level=info msg="StartContainer for \"08105ba4c672c8fccc8d2c2a39b86fe6a009f075ec4e5d130c3a66b899f7fdf1\" returns successfully" Sep 9 22:14:53.613417 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-08105ba4c672c8fccc8d2c2a39b86fe6a009f075ec4e5d130c3a66b899f7fdf1-rootfs.mount: Deactivated successfully. Sep 9 22:14:54.111446 kubelet[2811]: E0909 22:14:54.111339 2811 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 22:14:54.114120 containerd[1584]: time="2025-09-09T22:14:54.114063008Z" level=info msg="CreateContainer within sandbox \"d06d1f8b9c159e535efc061b3bfff3a9f5684e1f6629d68d767cd5a1cacda56e\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Sep 9 22:14:54.331749 containerd[1584]: time="2025-09-09T22:14:54.330019322Z" level=info msg="Container 9db0d6979767183e56e0d6c4e0886626e86d4a3ed5d14aeddef256d848bbca87: CDI devices from CRI Config.CDIDevices: []" Sep 9 22:14:54.453884 containerd[1584]: time="2025-09-09T22:14:54.453617564Z" level=info msg="CreateContainer within sandbox \"d06d1f8b9c159e535efc061b3bfff3a9f5684e1f6629d68d767cd5a1cacda56e\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"9db0d6979767183e56e0d6c4e0886626e86d4a3ed5d14aeddef256d848bbca87\"" Sep 9 22:14:54.454543 containerd[1584]: time="2025-09-09T22:14:54.454409036Z" level=info msg="StartContainer for \"9db0d6979767183e56e0d6c4e0886626e86d4a3ed5d14aeddef256d848bbca87\"" Sep 9 22:14:54.455658 containerd[1584]: time="2025-09-09T22:14:54.455612396Z" level=info msg="connecting to shim 9db0d6979767183e56e0d6c4e0886626e86d4a3ed5d14aeddef256d848bbca87" address="unix:///run/containerd/s/c27e8e88d4dc81b412497e3fb4549fd45b81ae1d1a0324be3d9514ef236f5e93" protocol=ttrpc version=3 Sep 9 22:14:54.488221 systemd[1]: Started cri-containerd-9db0d6979767183e56e0d6c4e0886626e86d4a3ed5d14aeddef256d848bbca87.scope - libcontainer container 9db0d6979767183e56e0d6c4e0886626e86d4a3ed5d14aeddef256d848bbca87. Sep 9 22:14:54.532618 systemd[1]: cri-containerd-9db0d6979767183e56e0d6c4e0886626e86d4a3ed5d14aeddef256d848bbca87.scope: Deactivated successfully. 
Sep 9 22:14:54.534031 containerd[1584]: time="2025-09-09T22:14:54.533618253Z" level=info msg="TaskExit event in podsandbox handler container_id:\"9db0d6979767183e56e0d6c4e0886626e86d4a3ed5d14aeddef256d848bbca87\" id:\"9db0d6979767183e56e0d6c4e0886626e86d4a3ed5d14aeddef256d848bbca87\" pid:5025 exited_at:{seconds:1757456094 nanos:533092241}" Sep 9 22:14:54.536956 containerd[1584]: time="2025-09-09T22:14:54.536894221Z" level=info msg="received exit event container_id:\"9db0d6979767183e56e0d6c4e0886626e86d4a3ed5d14aeddef256d848bbca87\" id:\"9db0d6979767183e56e0d6c4e0886626e86d4a3ed5d14aeddef256d848bbca87\" pid:5025 exited_at:{seconds:1757456094 nanos:533092241}" Sep 9 22:14:54.549039 containerd[1584]: time="2025-09-09T22:14:54.548939705Z" level=info msg="StartContainer for \"9db0d6979767183e56e0d6c4e0886626e86d4a3ed5d14aeddef256d848bbca87\" returns successfully" Sep 9 22:14:54.569587 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-9db0d6979767183e56e0d6c4e0886626e86d4a3ed5d14aeddef256d848bbca87-rootfs.mount: Deactivated successfully. Sep 9 22:14:55.119098 kubelet[2811]: E0909 22:14:55.119051 2811 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 22:14:55.122688 containerd[1584]: time="2025-09-09T22:14:55.122624120Z" level=info msg="CreateContainer within sandbox \"d06d1f8b9c159e535efc061b3bfff3a9f5684e1f6629d68d767cd5a1cacda56e\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Sep 9 22:14:55.142883 containerd[1584]: time="2025-09-09T22:14:55.141919421Z" level=info msg="Container 5d256d5a4a64b2c973c038291f13d5207085295dc4b770f84d55ffcba9605d6e: CDI devices from CRI Config.CDIDevices: []" Sep 9 22:14:55.158981 containerd[1584]: time="2025-09-09T22:14:55.158899916Z" level=info msg="CreateContainer within sandbox \"d06d1f8b9c159e535efc061b3bfff3a9f5684e1f6629d68d767cd5a1cacda56e\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"5d256d5a4a64b2c973c038291f13d5207085295dc4b770f84d55ffcba9605d6e\"" Sep 9 22:14:55.159723 containerd[1584]: time="2025-09-09T22:14:55.159659229Z" level=info msg="StartContainer for \"5d256d5a4a64b2c973c038291f13d5207085295dc4b770f84d55ffcba9605d6e\"" Sep 9 22:14:55.161382 kubelet[2811]: E0909 22:14:55.161343 2811 kubelet.go:2902] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Sep 9 22:14:55.161476 containerd[1584]: time="2025-09-09T22:14:55.161436362Z" level=info msg="connecting to shim 5d256d5a4a64b2c973c038291f13d5207085295dc4b770f84d55ffcba9605d6e" address="unix:///run/containerd/s/c27e8e88d4dc81b412497e3fb4549fd45b81ae1d1a0324be3d9514ef236f5e93" protocol=ttrpc version=3 Sep 9 22:14:55.189033 systemd[1]: Started cri-containerd-5d256d5a4a64b2c973c038291f13d5207085295dc4b770f84d55ffcba9605d6e.scope - libcontainer container 5d256d5a4a64b2c973c038291f13d5207085295dc4b770f84d55ffcba9605d6e. 
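At this point the sandbox d06d1f8b... has run through the whole init chain (mount-cgroup, apply-sysctl-overwrites, mount-bpf-fs, clean-cilium-state) and the cilium-agent container is about to start. A sketch that lists every container belonging to that sandbox over CRI, which would show the same chain the entries above created one by one; the sandbox ID is the one returned by RunPodSandbox in the log, and the socket path is again the assumed containerd default.

// listpod.go - list the containers of the cilium-jsrlq sandbox via CRI.
package main

import (
	"context"
	"fmt"
	"log"
	"time"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
	defer cancel()

	conn, err := grpc.DialContext(ctx, "unix:///run/containerd/containerd.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		log.Fatalf("dial CRI endpoint: %v", err)
	}
	defer conn.Close()

	client := runtimeapi.NewRuntimeServiceClient(conn)

	resp, err := client.ListContainers(ctx, &runtimeapi.ListContainersRequest{
		Filter: &runtimeapi.ContainerFilter{
			// Sandbox ID from the RunPodSandbox return value above.
			PodSandboxId: "d06d1f8b9c159e535efc061b3bfff3a9f5684e1f6629d68d767cd5a1cacda56e",
		},
	})
	if err != nil {
		log.Fatalf("ListContainers: %v", err)
	}
	for _, c := range resp.Containers {
		fmt.Printf("%-24s %s\n", c.GetMetadata().GetName(), c.GetState())
	}
}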
Sep 9 22:14:55.248849 containerd[1584]: time="2025-09-09T22:14:55.248666217Z" level=info msg="StartContainer for \"5d256d5a4a64b2c973c038291f13d5207085295dc4b770f84d55ffcba9605d6e\" returns successfully" Sep 9 22:14:55.333491 containerd[1584]: time="2025-09-09T22:14:55.333418900Z" level=info msg="TaskExit event in podsandbox handler container_id:\"5d256d5a4a64b2c973c038291f13d5207085295dc4b770f84d55ffcba9605d6e\" id:\"78ef6978eecea179886c9e21411d1bbc01d2af40f68d40100776500c3583abbd\" pid:5100 exited_at:{seconds:1757456095 nanos:332945556}" Sep 9 22:14:55.973794 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni-avx)) Sep 9 22:14:56.129754 kubelet[2811]: E0909 22:14:56.129678 2811 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 22:14:56.161205 kubelet[2811]: I0909 22:14:56.161107 2811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-jsrlq" podStartSLOduration=6.161083036 podStartE2EDuration="6.161083036s" podCreationTimestamp="2025-09-09 22:14:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-09 22:14:56.15497581 +0000 UTC m=+172.585501213" watchObservedRunningTime="2025-09-09 22:14:56.161083036 +0000 UTC m=+172.591608439" Sep 9 22:14:57.133795 kubelet[2811]: E0909 22:14:57.133747 2811 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 22:14:57.868692 containerd[1584]: time="2025-09-09T22:14:57.868620361Z" level=info msg="TaskExit event in podsandbox handler container_id:\"5d256d5a4a64b2c973c038291f13d5207085295dc4b770f84d55ffcba9605d6e\" id:\"4b9f9a5bdf7459bc76d24e65fea565e126491eb8436b85935d1ad3cca081f640\" pid:5238 exit_status:1 exited_at:{seconds:1757456097 nanos:868176593}" Sep 9 22:14:58.442573 kubelet[2811]: I0909 22:14:58.442504 2811 setters.go:600] "Node became not ready" node="localhost" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-09T22:14:58Z","lastTransitionTime":"2025-09-09T22:14:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"} Sep 9 22:14:59.611176 systemd-networkd[1474]: lxc_health: Link UP Sep 9 22:14:59.612810 systemd-networkd[1474]: lxc_health: Gained carrier Sep 9 22:15:00.276772 containerd[1584]: time="2025-09-09T22:15:00.276627451Z" level=info msg="TaskExit event in podsandbox handler container_id:\"5d256d5a4a64b2c973c038291f13d5207085295dc4b770f84d55ffcba9605d6e\" id:\"2ef0dd8f24172e296df080f8a19078ee11dc388b64c30966820ffc921c6ef540\" pid:5621 exited_at:{seconds:1757456100 nanos:275146390}" Sep 9 22:15:00.283046 kubelet[2811]: E0909 22:15:00.280839 2811 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 127.0.0.1:37246->127.0.0.1:45855: write tcp 127.0.0.1:37246->127.0.0.1:45855: write: broken pipe Sep 9 22:15:00.736831 kubelet[2811]: E0909 22:15:00.736775 2811 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 22:15:00.824970 systemd-networkd[1474]: lxc_health: Gained IPv6LL Sep 9 22:15:01.143564 kubelet[2811]: E0909 22:15:01.143511 
2811 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 22:15:01.865755 kubelet[2811]: E0909 22:15:01.865626 2811 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 22:15:02.145999 kubelet[2811]: E0909 22:15:02.145831 2811 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 22:15:02.414918 containerd[1584]: time="2025-09-09T22:15:02.414528260Z" level=info msg="TaskExit event in podsandbox handler container_id:\"5d256d5a4a64b2c973c038291f13d5207085295dc4b770f84d55ffcba9605d6e\" id:\"fd05691233b7ebe60dd26f23bfc615c758604e6db9ce7d71ce083d62921a2fb3\" pid:5657 exited_at:{seconds:1757456102 nanos:414107643}" Sep 9 22:15:04.567427 containerd[1584]: time="2025-09-09T22:15:04.567084209Z" level=info msg="TaskExit event in podsandbox handler container_id:\"5d256d5a4a64b2c973c038291f13d5207085295dc4b770f84d55ffcba9605d6e\" id:\"581339a6571fd942de6bbbcc9851f8fefae607910879b82b95d7db109adb3094\" pid:5691 exited_at:{seconds:1757456104 nanos:566599040}" Sep 9 22:15:04.842844 containerd[1584]: time="2025-09-09T22:15:04.842395857Z" level=info msg="StopPodSandbox for \"9efcfb911708b17146bbb5f84f46a861f7ea072204a4bfe5bdeebaad0ed6c788\"" Sep 9 22:15:04.842844 containerd[1584]: time="2025-09-09T22:15:04.842605163Z" level=info msg="TearDown network for sandbox \"9efcfb911708b17146bbb5f84f46a861f7ea072204a4bfe5bdeebaad0ed6c788\" successfully" Sep 9 22:15:04.842844 containerd[1584]: time="2025-09-09T22:15:04.842623878Z" level=info msg="StopPodSandbox for \"9efcfb911708b17146bbb5f84f46a861f7ea072204a4bfe5bdeebaad0ed6c788\" returns successfully" Sep 9 22:15:04.843251 containerd[1584]: time="2025-09-09T22:15:04.843028325Z" level=info msg="RemovePodSandbox for \"9efcfb911708b17146bbb5f84f46a861f7ea072204a4bfe5bdeebaad0ed6c788\"" Sep 9 22:15:04.843251 containerd[1584]: time="2025-09-09T22:15:04.843072067Z" level=info msg="Forcibly stopping sandbox \"9efcfb911708b17146bbb5f84f46a861f7ea072204a4bfe5bdeebaad0ed6c788\"" Sep 9 22:15:04.843251 containerd[1584]: time="2025-09-09T22:15:04.843151057Z" level=info msg="TearDown network for sandbox \"9efcfb911708b17146bbb5f84f46a861f7ea072204a4bfe5bdeebaad0ed6c788\" successfully" Sep 9 22:15:04.845392 containerd[1584]: time="2025-09-09T22:15:04.845341986Z" level=info msg="Ensure that sandbox 9efcfb911708b17146bbb5f84f46a861f7ea072204a4bfe5bdeebaad0ed6c788 in task-service has been cleanup successfully" Sep 9 22:15:04.857583 containerd[1584]: time="2025-09-09T22:15:04.857475984Z" level=info msg="RemovePodSandbox \"9efcfb911708b17146bbb5f84f46a861f7ea072204a4bfe5bdeebaad0ed6c788\" returns successfully" Sep 9 22:15:04.859625 containerd[1584]: time="2025-09-09T22:15:04.859552486Z" level=info msg="StopPodSandbox for \"34d0fca3a76acda13e23eb4de4513535648a20deb0c35114f3a2563ef2af7605\"" Sep 9 22:15:04.859903 containerd[1584]: time="2025-09-09T22:15:04.859791959Z" level=info msg="TearDown network for sandbox \"34d0fca3a76acda13e23eb4de4513535648a20deb0c35114f3a2563ef2af7605\" successfully" Sep 9 22:15:04.859903 containerd[1584]: time="2025-09-09T22:15:04.859817829Z" level=info msg="StopPodSandbox for \"34d0fca3a76acda13e23eb4de4513535648a20deb0c35114f3a2563ef2af7605\" returns successfully" Sep 9 22:15:04.860555 
containerd[1584]: time="2025-09-09T22:15:04.860400162Z" level=info msg="RemovePodSandbox for \"34d0fca3a76acda13e23eb4de4513535648a20deb0c35114f3a2563ef2af7605\"" Sep 9 22:15:04.860603 containerd[1584]: time="2025-09-09T22:15:04.860562078Z" level=info msg="Forcibly stopping sandbox \"34d0fca3a76acda13e23eb4de4513535648a20deb0c35114f3a2563ef2af7605\"" Sep 9 22:15:04.860802 containerd[1584]: time="2025-09-09T22:15:04.860755404Z" level=info msg="TearDown network for sandbox \"34d0fca3a76acda13e23eb4de4513535648a20deb0c35114f3a2563ef2af7605\" successfully" Sep 9 22:15:04.862824 containerd[1584]: time="2025-09-09T22:15:04.862760030Z" level=info msg="Ensure that sandbox 34d0fca3a76acda13e23eb4de4513535648a20deb0c35114f3a2563ef2af7605 in task-service has been cleanup successfully" Sep 9 22:15:04.978314 containerd[1584]: time="2025-09-09T22:15:04.978198241Z" level=info msg="RemovePodSandbox \"34d0fca3a76acda13e23eb4de4513535648a20deb0c35114f3a2563ef2af7605\" returns successfully" Sep 9 22:15:06.664733 containerd[1584]: time="2025-09-09T22:15:06.664657996Z" level=info msg="TaskExit event in podsandbox handler container_id:\"5d256d5a4a64b2c973c038291f13d5207085295dc4b770f84d55ffcba9605d6e\" id:\"5bb430d42d3f00966b8492e00ffe60bd6667d8c93fed1b9c3ff064369b800bb4\" pid:5718 exited_at:{seconds:1757456106 nanos:664224133}" Sep 9 22:15:06.692083 sshd[4876]: Connection closed by 10.0.0.1 port 35224 Sep 9 22:15:06.692531 sshd-session[4849]: pam_unix(sshd:session): session closed for user core Sep 9 22:15:06.696920 systemd[1]: sshd@32-10.0.0.117:22-10.0.0.1:35224.service: Deactivated successfully. Sep 9 22:15:06.699087 systemd[1]: session-32.scope: Deactivated successfully. Sep 9 22:15:06.700055 systemd-logind[1563]: Session 32 logged out. Waiting for processes to exit. Sep 9 22:15:06.701406 systemd-logind[1563]: Removed session 32.