Sep 13 10:13:40.116207 kernel: Linux version 6.12.47-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.3.0 p8) 14.3.0, GNU ld (Gentoo 2.44 p4) 2.44.0) #1 SMP PREEMPT_DYNAMIC Sat Sep 13 08:30:13 -00 2025
Sep 13 10:13:40.116249 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=29913b080383fb09f846b4e8f22e4ebe48c8b17d0cc2b8191530bb5bda42eda0
Sep 13 10:13:40.116261 kernel: BIOS-provided physical RAM map:
Sep 13 10:13:40.116270 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Sep 13 10:13:40.116279 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Sep 13 10:13:40.116287 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Sep 13 10:13:40.116298 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000009cfdbfff] usable
Sep 13 10:13:40.116307 kernel: BIOS-e820: [mem 0x000000009cfdc000-0x000000009cffffff] reserved
Sep 13 10:13:40.116318 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved
Sep 13 10:13:40.116327 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved
Sep 13 10:13:40.116336 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Sep 13 10:13:40.116345 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Sep 13 10:13:40.116353 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
Sep 13 10:13:40.116363 kernel: NX (Execute Disable) protection: active
Sep 13 10:13:40.116376 kernel: APIC: Static calls initialized
Sep 13 10:13:40.116386 kernel: SMBIOS 2.8 present.
Sep 13 10:13:40.116396 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.16.2-debian-1.16.2-1 04/01/2014
Sep 13 10:13:40.116406 kernel: DMI: Memory slots populated: 1/1
Sep 13 10:13:40.116415 kernel: Hypervisor detected: KVM
Sep 13 10:13:40.116425 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Sep 13 10:13:40.116435 kernel: kvm-clock: using sched offset of 3295620983 cycles
Sep 13 10:13:40.116445 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Sep 13 10:13:40.116456 kernel: tsc: Detected 2794.748 MHz processor
Sep 13 10:13:40.116469 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Sep 13 10:13:40.116479 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Sep 13 10:13:40.116489 kernel: last_pfn = 0x9cfdc max_arch_pfn = 0x400000000
Sep 13 10:13:40.116499 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Sep 13 10:13:40.116509 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Sep 13 10:13:40.116519 kernel: Using GB pages for direct mapping
Sep 13 10:13:40.116529 kernel: ACPI: Early table checksum verification disabled
Sep 13 10:13:40.116538 kernel: ACPI: RSDP 0x00000000000F59D0 000014 (v00 BOCHS )
Sep 13 10:13:40.116548 kernel: ACPI: RSDT 0x000000009CFE241A 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 13 10:13:40.116561 kernel: ACPI: FACP 0x000000009CFE21FA 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Sep 13 10:13:40.116571 kernel: ACPI: DSDT 0x000000009CFE0040 0021BA (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 13 10:13:40.116581 kernel: ACPI: FACS 0x000000009CFE0000 000040
Sep 13 10:13:40.116591 kernel: ACPI: APIC 0x000000009CFE22EE 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 13 10:13:40.116601 kernel: ACPI: HPET 0x000000009CFE237E 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 13 10:13:40.116610 kernel: ACPI: MCFG 0x000000009CFE23B6 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 13 10:13:40.116620 kernel: ACPI: WAET 0x000000009CFE23F2 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 13 10:13:40.116630 kernel: ACPI: Reserving FACP table memory at [mem 0x9cfe21fa-0x9cfe22ed]
Sep 13 10:13:40.116648 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cfe0040-0x9cfe21f9]
Sep 13 10:13:40.116657 kernel: ACPI: Reserving FACS table memory at [mem 0x9cfe0000-0x9cfe003f]
Sep 13 10:13:40.116667 kernel: ACPI: Reserving APIC table memory at [mem 0x9cfe22ee-0x9cfe237d]
Sep 13 10:13:40.116678 kernel: ACPI: Reserving HPET table memory at [mem 0x9cfe237e-0x9cfe23b5]
Sep 13 10:13:40.116688 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cfe23b6-0x9cfe23f1]
Sep 13 10:13:40.116698 kernel: ACPI: Reserving WAET table memory at [mem 0x9cfe23f2-0x9cfe2419]
Sep 13 10:13:40.116711 kernel: No NUMA configuration found
Sep 13 10:13:40.116721 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cfdbfff]
Sep 13 10:13:40.116731 kernel: NODE_DATA(0) allocated [mem 0x9cfd4dc0-0x9cfdbfff]
Sep 13 10:13:40.116741 kernel: Zone ranges:
Sep 13 10:13:40.116751 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Sep 13 10:13:40.116761 kernel: DMA32 [mem 0x0000000001000000-0x000000009cfdbfff]
Sep 13 10:13:40.116771 kernel: Normal empty
Sep 13 10:13:40.116781 kernel: Device empty
Sep 13 10:13:40.116791 kernel: Movable zone start for each node
Sep 13 10:13:40.116801 kernel: Early memory node ranges
Sep 13 10:13:40.116815 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
Sep 13 10:13:40.116825 kernel: node 0: [mem 0x0000000000100000-0x000000009cfdbfff]
Sep 13 10:13:40.116835 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cfdbfff]
Sep 13 10:13:40.116845 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Sep 13 10:13:40.116854 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Sep 13 10:13:40.116865 kernel: On node 0, zone DMA32: 12324 pages in unavailable ranges
Sep 13 10:13:40.116875 kernel: ACPI: PM-Timer IO Port: 0x608
Sep 13 10:13:40.116885 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Sep 13 10:13:40.116895 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Sep 13 10:13:40.116909 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Sep 13 10:13:40.116919 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Sep 13 10:13:40.116929 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Sep 13 10:13:40.116939 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Sep 13 10:13:40.116950 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Sep 13 10:13:40.116960 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Sep 13 10:13:40.116970 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Sep 13 10:13:40.116997 kernel: TSC deadline timer available
Sep 13 10:13:40.117007 kernel: CPU topo: Max. logical packages: 1
Sep 13 10:13:40.117022 kernel: CPU topo: Max. logical dies: 1
Sep 13 10:13:40.117032 kernel: CPU topo: Max. dies per package: 1
Sep 13 10:13:40.117042 kernel: CPU topo: Max. threads per core: 1
Sep 13 10:13:40.117052 kernel: CPU topo: Num. cores per package: 4
Sep 13 10:13:40.117062 kernel: CPU topo: Num. threads per package: 4
Sep 13 10:13:40.117072 kernel: CPU topo: Allowing 4 present CPUs plus 0 hotplug CPUs
Sep 13 10:13:40.117083 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Sep 13 10:13:40.117093 kernel: kvm-guest: KVM setup pv remote TLB flush
Sep 13 10:13:40.117103 kernel: kvm-guest: setup PV sched yield
Sep 13 10:13:40.117113 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices
Sep 13 10:13:40.117126 kernel: Booting paravirtualized kernel on KVM
Sep 13 10:13:40.117137 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Sep 13 10:13:40.117147 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1
Sep 13 10:13:40.117157 kernel: percpu: Embedded 60 pages/cpu s207832 r8192 d29736 u524288
Sep 13 10:13:40.117167 kernel: pcpu-alloc: s207832 r8192 d29736 u524288 alloc=1*2097152
Sep 13 10:13:40.117176 kernel: pcpu-alloc: [0] 0 1 2 3
Sep 13 10:13:40.117186 kernel: kvm-guest: PV spinlocks enabled
Sep 13 10:13:40.117196 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Sep 13 10:13:40.117207 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=29913b080383fb09f846b4e8f22e4ebe48c8b17d0cc2b8191530bb5bda42eda0
Sep 13 10:13:40.117234 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Sep 13 10:13:40.117244 kernel: random: crng init done
Sep 13 10:13:40.117254 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Sep 13 10:13:40.117264 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Sep 13 10:13:40.117274 kernel: Fallback order for Node 0: 0
Sep 13 10:13:40.117284 kernel: Built 1 zonelists, mobility grouping on. Total pages: 642938
Sep 13 10:13:40.117294 kernel: Policy zone: DMA32
Sep 13 10:13:40.117304 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Sep 13 10:13:40.117317 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Sep 13 10:13:40.117327 kernel: ftrace: allocating 40125 entries in 157 pages
Sep 13 10:13:40.117337 kernel: ftrace: allocated 157 pages with 5 groups
Sep 13 10:13:40.117347 kernel: Dynamic Preempt: voluntary
Sep 13 10:13:40.117357 kernel: rcu: Preemptible hierarchical RCU implementation.
Sep 13 10:13:40.117367 kernel: rcu: RCU event tracing is enabled.
Sep 13 10:13:40.117378 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Sep 13 10:13:40.117388 kernel: Trampoline variant of Tasks RCU enabled.
Sep 13 10:13:40.117398 kernel: Rude variant of Tasks RCU enabled.
Sep 13 10:13:40.117411 kernel: Tracing variant of Tasks RCU enabled.
Sep 13 10:13:40.117423 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Sep 13 10:13:40.117434 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Sep 13 10:13:40.117445 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Sep 13 10:13:40.117455 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Sep 13 10:13:40.117466 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Sep 13 10:13:40.117476 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16
Sep 13 10:13:40.117486 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Sep 13 10:13:40.117506 kernel: Console: colour VGA+ 80x25
Sep 13 10:13:40.117517 kernel: printk: legacy console [ttyS0] enabled
Sep 13 10:13:40.117527 kernel: ACPI: Core revision 20240827
Sep 13 10:13:40.117538 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Sep 13 10:13:40.117551 kernel: APIC: Switch to symmetric I/O mode setup
Sep 13 10:13:40.117561 kernel: x2apic enabled
Sep 13 10:13:40.117571 kernel: APIC: Switched APIC routing to: physical x2apic
Sep 13 10:13:40.117581 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask()
Sep 13 10:13:40.117592 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself()
Sep 13 10:13:40.117605 kernel: kvm-guest: setup PV IPIs
Sep 13 10:13:40.117615 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Sep 13 10:13:40.117626 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x2848df6a9de, max_idle_ns: 440795280912 ns
Sep 13 10:13:40.117636 kernel: Calibrating delay loop (skipped) preset value.. 5589.49 BogoMIPS (lpj=2794748)
Sep 13 10:13:40.117647 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Sep 13 10:13:40.117657 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Sep 13 10:13:40.117668 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Sep 13 10:13:40.117678 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Sep 13 10:13:40.117688 kernel: Spectre V2 : Mitigation: Retpolines
Sep 13 10:13:40.117701 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Sep 13 10:13:40.117712 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls
Sep 13 10:13:40.117722 kernel: active return thunk: retbleed_return_thunk
Sep 13 10:13:40.117732 kernel: RETBleed: Mitigation: untrained return thunk
Sep 13 10:13:40.117742 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Sep 13 10:13:40.117753 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Sep 13 10:13:40.117763 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied!
Sep 13 10:13:40.117774 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options.
Sep 13 10:13:40.117787 kernel: active return thunk: srso_return_thunk
Sep 13 10:13:40.117798 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode
Sep 13 10:13:40.117808 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Sep 13 10:13:40.117818 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Sep 13 10:13:40.117829 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Sep 13 10:13:40.117839 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Sep 13 10:13:40.117850 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format.
Sep 13 10:13:40.117860 kernel: Freeing SMP alternatives memory: 32K
Sep 13 10:13:40.117870 kernel: pid_max: default: 32768 minimum: 301
Sep 13 10:13:40.117884 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima
Sep 13 10:13:40.117895 kernel: landlock: Up and running.
Sep 13 10:13:40.117905 kernel: SELinux: Initializing.
Sep 13 10:13:40.117915 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Sep 13 10:13:40.117925 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Sep 13 10:13:40.117936 kernel: smpboot: CPU0: AMD EPYC 7402P 24-Core Processor (family: 0x17, model: 0x31, stepping: 0x0)
Sep 13 10:13:40.117946 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
Sep 13 10:13:40.117956 kernel: ... version: 0
Sep 13 10:13:40.117967 kernel: ... bit width: 48
Sep 13 10:13:40.117996 kernel: ... generic registers: 6
Sep 13 10:13:40.118007 kernel: ... value mask: 0000ffffffffffff
Sep 13 10:13:40.118017 kernel: ... max period: 00007fffffffffff
Sep 13 10:13:40.118028 kernel: ... fixed-purpose events: 0
Sep 13 10:13:40.118038 kernel: ... event mask: 000000000000003f
Sep 13 10:13:40.118047 kernel: signal: max sigframe size: 1776
Sep 13 10:13:40.118058 kernel: rcu: Hierarchical SRCU implementation.
Sep 13 10:13:40.118068 kernel: rcu: Max phase no-delay instances is 400.
Sep 13 10:13:40.118079 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level
Sep 13 10:13:40.118093 kernel: smp: Bringing up secondary CPUs ...
Sep 13 10:13:40.118103 kernel: smpboot: x86: Booting SMP configuration:
Sep 13 10:13:40.118114 kernel: .... node #0, CPUs: #1 #2 #3
Sep 13 10:13:40.118124 kernel: smp: Brought up 1 node, 4 CPUs
Sep 13 10:13:40.118135 kernel: smpboot: Total of 4 processors activated (22357.98 BogoMIPS)
Sep 13 10:13:40.118147 kernel: Memory: 2428916K/2571752K available (14336K kernel code, 2432K rwdata, 9992K rodata, 54088K init, 2876K bss, 136904K reserved, 0K cma-reserved)
Sep 13 10:13:40.118158 kernel: devtmpfs: initialized
Sep 13 10:13:40.118169 kernel: x86/mm: Memory block size: 128MB
Sep 13 10:13:40.118179 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Sep 13 10:13:40.118193 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Sep 13 10:13:40.118204 kernel: pinctrl core: initialized pinctrl subsystem
Sep 13 10:13:40.118224 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Sep 13 10:13:40.118235 kernel: audit: initializing netlink subsys (disabled)
Sep 13 10:13:40.118245 kernel: audit: type=2000 audit(1757758417.095:1): state=initialized audit_enabled=0 res=1
Sep 13 10:13:40.118256 kernel: thermal_sys: Registered thermal governor 'step_wise'
Sep 13 10:13:40.118267 kernel: thermal_sys: Registered thermal governor 'user_space'
Sep 13 10:13:40.118278 kernel: cpuidle: using governor menu
Sep 13 10:13:40.118289 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Sep 13 10:13:40.118303 kernel: dca service started, version 1.12.1
Sep 13 10:13:40.118314 kernel: PCI: ECAM [mem 0xb0000000-0xbfffffff] (base 0xb0000000) for domain 0000 [bus 00-ff]
Sep 13 10:13:40.118325 kernel: PCI: ECAM [mem 0xb0000000-0xbfffffff] reserved as E820 entry
Sep 13 10:13:40.118337 kernel: PCI: Using configuration type 1 for base access
Sep 13 10:13:40.118348 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Sep 13 10:13:40.118359 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Sep 13 10:13:40.118370 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Sep 13 10:13:40.118380 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Sep 13 10:13:40.118391 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Sep 13 10:13:40.118405 kernel: ACPI: Added _OSI(Module Device)
Sep 13 10:13:40.118416 kernel: ACPI: Added _OSI(Processor Device)
Sep 13 10:13:40.118427 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Sep 13 10:13:40.118438 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Sep 13 10:13:40.118449 kernel: ACPI: Interpreter enabled
Sep 13 10:13:40.118460 kernel: ACPI: PM: (supports S0 S3 S5)
Sep 13 10:13:40.118471 kernel: ACPI: Using IOAPIC for interrupt routing
Sep 13 10:13:40.118482 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Sep 13 10:13:40.118492 kernel: PCI: Using E820 reservations for host bridge windows
Sep 13 10:13:40.118506 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Sep 13 10:13:40.118518 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Sep 13 10:13:40.118737 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Sep 13 10:13:40.118899 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR]
Sep 13 10:13:40.119072 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability]
Sep 13 10:13:40.119088 kernel: PCI host bridge to bus 0000:00
Sep 13 10:13:40.119255 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Sep 13 10:13:40.119404 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Sep 13 10:13:40.119545 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Sep 13 10:13:40.119683 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xafffffff window]
Sep 13 10:13:40.119821 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Sep 13 10:13:40.119956 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x8ffffffff window]
Sep 13 10:13:40.120124 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Sep 13 10:13:40.120317 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000 conventional PCI endpoint
Sep 13 10:13:40.120492 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000 conventional PCI endpoint
Sep 13 10:13:40.120649 kernel: pci 0000:00:01.0: BAR 0 [mem 0xfd000000-0xfdffffff pref]
Sep 13 10:13:40.120798 kernel: pci 0000:00:01.0: BAR 2 [mem 0xfebd0000-0xfebd0fff]
Sep 13 10:13:40.120921 kernel: pci 0000:00:01.0: ROM [mem 0xfebc0000-0xfebcffff pref]
Sep 13 10:13:40.121119 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Sep 13 10:13:40.121300 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00 conventional PCI endpoint
Sep 13 10:13:40.121430 kernel: pci 0000:00:02.0: BAR 0 [io 0xc0c0-0xc0df]
Sep 13 10:13:40.121551 kernel: pci 0000:00:02.0: BAR 1 [mem 0xfebd1000-0xfebd1fff]
Sep 13 10:13:40.121670 kernel: pci 0000:00:02.0: BAR 4 [mem 0xfe000000-0xfe003fff 64bit pref]
Sep 13 10:13:40.121799 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000 conventional PCI endpoint
Sep 13 10:13:40.121921 kernel: pci 0000:00:03.0: BAR 0 [io 0xc000-0xc07f]
Sep 13 10:13:40.122068 kernel: pci 0000:00:03.0: BAR 1 [mem 0xfebd2000-0xfebd2fff]
Sep 13 10:13:40.122190 kernel: pci 0000:00:03.0: BAR 4 [mem 0xfe004000-0xfe007fff 64bit pref]
Sep 13 10:13:40.122355 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint
Sep 13 10:13:40.122488 kernel: pci 0000:00:04.0: BAR 0 [io 0xc0e0-0xc0ff]
Sep 13 10:13:40.122610 kernel: pci 0000:00:04.0: BAR 1 [mem 0xfebd3000-0xfebd3fff]
Sep 13 10:13:40.122729 kernel: pci 0000:00:04.0: BAR 4 [mem 0xfe008000-0xfe00bfff 64bit pref]
Sep 13 10:13:40.122848 kernel: pci 0000:00:04.0: ROM [mem 0xfeb80000-0xfebbffff pref]
Sep 13 10:13:40.122992 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100 conventional PCI endpoint
Sep 13 10:13:40.123123 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Sep 13 10:13:40.123268 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601 conventional PCI endpoint
Sep 13 10:13:40.123390 kernel: pci 0000:00:1f.2: BAR 4 [io 0xc100-0xc11f]
Sep 13 10:13:40.123539 kernel: pci 0000:00:1f.2: BAR 5 [mem 0xfebd4000-0xfebd4fff]
Sep 13 10:13:40.123710 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500 conventional PCI endpoint
Sep 13 10:13:40.123867 kernel: pci 0000:00:1f.3: BAR 4 [io 0x0700-0x073f]
Sep 13 10:13:40.123882 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Sep 13 10:13:40.123897 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Sep 13 10:13:40.123907 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Sep 13 10:13:40.123917 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Sep 13 10:13:40.123927 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Sep 13 10:13:40.123937 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Sep 13 10:13:40.123946 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Sep 13 10:13:40.123956 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Sep 13 10:13:40.123966 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Sep 13 10:13:40.123993 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Sep 13 10:13:40.124007 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Sep 13 10:13:40.124017 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Sep 13 10:13:40.124027 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Sep 13 10:13:40.124037 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Sep 13 10:13:40.124047 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Sep 13 10:13:40.124058 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Sep 13 10:13:40.124068 kernel: iommu: Default domain type: Translated
Sep 13 10:13:40.124078 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Sep 13 10:13:40.124088 kernel: PCI: Using ACPI for IRQ routing
Sep 13 10:13:40.124098 kernel: PCI: pci_cache_line_size set to 64 bytes
Sep 13 10:13:40.124111 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Sep 13 10:13:40.124121 kernel: e820: reserve RAM buffer [mem 0x9cfdc000-0x9fffffff]
Sep 13 10:13:40.124292 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Sep 13 10:13:40.124449 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Sep 13 10:13:40.124610 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Sep 13 10:13:40.124626 kernel: vgaarb: loaded
Sep 13 10:13:40.124638 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Sep 13 10:13:40.124650 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Sep 13 10:13:40.124665 kernel: clocksource: Switched to clocksource kvm-clock
Sep 13 10:13:40.124676 kernel: VFS: Disk quotas dquot_6.6.0
Sep 13 10:13:40.124688 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Sep 13 10:13:40.124699 kernel: pnp: PnP ACPI init
Sep 13 10:13:40.124871 kernel: system 00:05: [mem 0xb0000000-0xbfffffff window] has been reserved
Sep 13 10:13:40.124889 kernel: pnp: PnP ACPI: found 6 devices
Sep 13 10:13:40.124900 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Sep 13 10:13:40.124912 kernel: NET: Registered PF_INET protocol family
Sep 13 10:13:40.124927 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Sep 13 10:13:40.124939 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Sep 13 10:13:40.124957 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Sep 13 10:13:40.124989 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Sep 13 10:13:40.125011 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Sep 13 10:13:40.125033 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Sep 13 10:13:40.125054 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Sep 13 10:13:40.125073 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Sep 13 10:13:40.125094 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Sep 13 10:13:40.125126 kernel: NET: Registered PF_XDP protocol family
Sep 13 10:13:40.125407 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Sep 13 10:13:40.125677 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Sep 13 10:13:40.125961 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Sep 13 10:13:40.126131 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xafffffff window]
Sep 13 10:13:40.126341 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window]
Sep 13 10:13:40.126489 kernel: pci_bus 0000:00: resource 9 [mem 0x100000000-0x8ffffffff window]
Sep 13 10:13:40.126506 kernel: PCI: CLS 0 bytes, default 64
Sep 13 10:13:40.126523 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x2848df6a9de, max_idle_ns: 440795280912 ns
Sep 13 10:13:40.126535 kernel: Initialise system trusted keyrings
Sep 13 10:13:40.126546 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Sep 13 10:13:40.126557 kernel: Key type asymmetric registered
Sep 13 10:13:40.126568 kernel: Asymmetric key parser 'x509' registered
Sep 13 10:13:40.126580 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Sep 13 10:13:40.126591 kernel: io scheduler mq-deadline registered
Sep 13 10:13:40.126603 kernel: io scheduler kyber registered
Sep 13 10:13:40.126614 kernel: io scheduler bfq registered
Sep 13 10:13:40.126628 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Sep 13 10:13:40.126640 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
Sep 13 10:13:40.126652 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23
Sep 13 10:13:40.126663 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20
Sep 13 10:13:40.126674 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Sep 13 10:13:40.126685 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Sep 13 10:13:40.126697 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Sep 13 10:13:40.126708 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Sep 13 10:13:40.126719 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Sep 13 10:13:40.126893 kernel: rtc_cmos 00:04: RTC can wake from S4
Sep 13 10:13:40.126911 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Sep 13 10:13:40.127111 kernel: rtc_cmos 00:04: registered as rtc0
Sep 13 10:13:40.127276 kernel: rtc_cmos 00:04: setting system clock to 2025-09-13T10:13:39 UTC (1757758419)
Sep 13 10:13:40.127425 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram, hpet irqs
Sep 13 10:13:40.127441 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
Sep 13 10:13:40.127453 kernel: NET: Registered PF_INET6 protocol family
Sep 13 10:13:40.127464 kernel: Segment Routing with IPv6
Sep 13 10:13:40.127480 kernel: In-situ OAM (IOAM) with IPv6
Sep 13 10:13:40.127491 kernel: NET: Registered PF_PACKET protocol family
Sep 13 10:13:40.127503 kernel: Key type dns_resolver registered
Sep 13 10:13:40.127514 kernel: IPI shorthand broadcast: enabled
Sep 13 10:13:40.127525 kernel: sched_clock: Marking stable (3387002089, 123945390)->(3569736035, -58788556)
Sep 13 10:13:40.127536 kernel: registered taskstats version 1
Sep 13 10:13:40.127547 kernel: Loading compiled-in X.509 certificates
Sep 13 10:13:40.127559 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.47-flatcar: cbb54677ad1c578839cdade5ab8500bbdb72e350'
Sep 13 10:13:40.127570 kernel: Demotion targets for Node 0: null
Sep 13 10:13:40.127584 kernel: Key type .fscrypt registered
Sep 13 10:13:40.127595 kernel: Key type fscrypt-provisioning registered
Sep 13 10:13:40.127607 kernel: ima: No TPM chip found, activating TPM-bypass!
Sep 13 10:13:40.127618 kernel: ima: Allocated hash algorithm: sha1
Sep 13 10:13:40.127629 kernel: ima: No architecture policies found
Sep 13 10:13:40.127641 kernel: clk: Disabling unused clocks
Sep 13 10:13:40.127652 kernel: Warning: unable to open an initial console.
Sep 13 10:13:40.127663 kernel: Freeing unused kernel image (initmem) memory: 54088K
Sep 13 10:13:40.127675 kernel: Write protecting the kernel read-only data: 24576k
Sep 13 10:13:40.127689 kernel: Freeing unused kernel image (rodata/data gap) memory: 248K
Sep 13 10:13:40.127700 kernel: Run /init as init process
Sep 13 10:13:40.127712 kernel: with arguments:
Sep 13 10:13:40.127723 kernel: /init
Sep 13 10:13:40.127734 kernel: with environment:
Sep 13 10:13:40.127744 kernel: HOME=/
Sep 13 10:13:40.127755 kernel: TERM=linux
Sep 13 10:13:40.127766 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Sep 13 10:13:40.127779 systemd[1]: Successfully made /usr/ read-only.
Sep 13 10:13:40.127830 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Sep 13 10:13:40.127847 systemd[1]: Detected virtualization kvm.
Sep 13 10:13:40.127859 systemd[1]: Detected architecture x86-64.
Sep 13 10:13:40.127874 systemd[1]: Running in initrd.
Sep 13 10:13:40.127886 systemd[1]: No hostname configured, using default hostname.
Sep 13 10:13:40.127901 systemd[1]: Hostname set to .
Sep 13 10:13:40.127913 systemd[1]: Initializing machine ID from VM UUID.
Sep 13 10:13:40.127925 systemd[1]: Queued start job for default target initrd.target.
Sep 13 10:13:40.127938 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Sep 13 10:13:40.127950 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Sep 13 10:13:40.127963 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Sep 13 10:13:40.127991 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Sep 13 10:13:40.128004 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Sep 13 10:13:40.128021 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Sep 13 10:13:40.128035 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Sep 13 10:13:40.128048 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Sep 13 10:13:40.128061 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Sep 13 10:13:40.128073 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Sep 13 10:13:40.128086 systemd[1]: Reached target paths.target - Path Units.
Sep 13 10:13:40.128098 systemd[1]: Reached target slices.target - Slice Units.
Sep 13 10:13:40.128114 systemd[1]: Reached target swap.target - Swaps.
Sep 13 10:13:40.128126 systemd[1]: Reached target timers.target - Timer Units.
Sep 13 10:13:40.128139 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Sep 13 10:13:40.128152 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Sep 13 10:13:40.128165 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Sep 13 10:13:40.128177 systemd[1]: Listening on systemd-journald.socket - Journal Sockets.
Sep 13 10:13:40.128189 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Sep 13 10:13:40.128202 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Sep 13 10:13:40.128224 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Sep 13 10:13:40.128277 systemd[1]: Reached target sockets.target - Socket Units.
Sep 13 10:13:40.128289 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Sep 13 10:13:40.128302 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Sep 13 10:13:40.128324 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Sep 13 10:13:40.128337 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply).
Sep 13 10:13:40.128359 systemd[1]: Starting systemd-fsck-usr.service...
Sep 13 10:13:40.128371 systemd[1]: Starting systemd-journald.service - Journal Service...
Sep 13 10:13:40.128384 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Sep 13 10:13:40.128396 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Sep 13 10:13:40.128409 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Sep 13 10:13:40.128433 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Sep 13 10:13:40.128452 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Sep 13 10:13:40.128466 systemd[1]: Finished systemd-fsck-usr.service.
Sep 13 10:13:40.128478 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Sep 13 10:13:40.128522 systemd-journald[220]: Collecting audit messages is disabled.
Sep 13 10:13:40.128564 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Sep 13 10:13:40.128577 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Sep 13 10:13:40.128599 systemd-journald[220]: Journal started
Sep 13 10:13:40.128634 systemd-journald[220]: Runtime Journal (/run/log/journal/6d1f4e0ad6184f8789d1a7137a3e3c2f) is 6M, max 48.6M, 42.5M free.
Sep 13 10:13:40.097894 systemd-modules-load[221]: Inserted module 'overlay'
Sep 13 10:13:40.164602 systemd[1]: Started systemd-journald.service - Journal Service.
Sep 13 10:13:40.164684 kernel: Bridge firewalling registered
Sep 13 10:13:40.130251 systemd-modules-load[221]: Inserted module 'br_netfilter'
Sep 13 10:13:40.166488 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Sep 13 10:13:40.169122 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Sep 13 10:13:40.171737 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Sep 13 10:13:40.179092 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Sep 13 10:13:40.182774 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Sep 13 10:13:40.183600 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Sep 13 10:13:40.203631 systemd-tmpfiles[245]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring.
Sep 13 10:13:40.208339 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Sep 13 10:13:40.209344 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Sep 13 10:13:40.211446 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Sep 13 10:13:40.215076 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Sep 13 10:13:40.220713 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Sep 13 10:13:40.241399 dracut-cmdline[261]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=29913b080383fb09f846b4e8f22e4ebe48c8b17d0cc2b8191530bb5bda42eda0
Sep 13 10:13:40.262145 systemd-resolved[260]: Positive Trust Anchors:
Sep 13 10:13:40.262160 systemd-resolved[260]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Sep 13 10:13:40.262194 systemd-resolved[260]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Sep 13 10:13:40.265017 systemd-resolved[260]: Defaulting to hostname 'linux'.
Sep 13 10:13:40.266334 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Sep 13 10:13:40.272815 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Sep 13 10:13:40.375026 kernel: SCSI subsystem initialized
Sep 13 10:13:40.384014 kernel: Loading iSCSI transport class v2.0-870.
Sep 13 10:13:40.395002 kernel: iscsi: registered transport (tcp)
Sep 13 10:13:40.420018 kernel: iscsi: registered transport (qla4xxx)
Sep 13 10:13:40.420086 kernel: QLogic iSCSI HBA Driver
Sep 13 10:13:40.443409 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Sep 13 10:13:40.470696 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Sep 13 10:13:40.473006 systemd[1]: Reached target network-pre.target - Preparation for Network.
Sep 13 10:13:40.544887 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Sep 13 10:13:40.547482 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Sep 13 10:13:40.618029 kernel: raid6: avx2x4 gen() 21392 MB/s
Sep 13 10:13:40.635016 kernel: raid6: avx2x2 gen() 28070 MB/s
Sep 13 10:13:40.652077 kernel: raid6: avx2x1 gen() 25750 MB/s
Sep 13 10:13:40.652112 kernel: raid6: using algorithm avx2x2 gen() 28070 MB/s
Sep 13 10:13:40.670058 kernel: raid6: .... xor() 19948 MB/s, rmw enabled
Sep 13 10:13:40.670090 kernel: raid6: using avx2x2 recovery algorithm
Sep 13 10:13:40.690012 kernel: xor: automatically using best checksumming function avx
Sep 13 10:13:40.853037 kernel: Btrfs loaded, zoned=no, fsverity=no
Sep 13 10:13:40.862287 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Sep 13 10:13:40.865368 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Sep 13 10:13:40.907654 systemd-udevd[471]: Using default interface naming scheme 'v255'.
Sep 13 10:13:40.914339 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Sep 13 10:13:40.915328 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Sep 13 10:13:40.945720 dracut-pre-trigger[475]: rd.md=0: removing MD RAID activation
Sep 13 10:13:40.978165 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Sep 13 10:13:40.979998 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Sep 13 10:13:41.052367 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Sep 13 10:13:41.056782 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Sep 13 10:13:41.090998 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues
Sep 13 10:13:41.101991 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB)
Sep 13 10:13:41.110026 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Sep 13 10:13:41.110081 kernel: GPT:9289727 != 19775487
Sep 13 10:13:41.110092 kernel: GPT:Alternate GPT header not at the end of the disk.
Sep 13 10:13:41.110102 kernel: GPT:9289727 != 19775487
Sep 13 10:13:41.110112 kernel: GPT: Use GNU Parted to correct GPT errors.
Sep 13 10:13:41.110243 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Sep 13 10:13:41.110995 kernel: cryptd: max_cpu_qlen set to 1000
Sep 13 10:13:41.116127 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input2
Sep 13 10:13:41.123024 kernel: libata version 3.00 loaded.
Sep 13 10:13:41.131010 kernel: ahci 0000:00:1f.2: version 3.0
Sep 13 10:13:41.134052 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16
Sep 13 10:13:41.134090 kernel: ahci 0000:00:1f.2: AHCI vers 0001.0000, 32 command slots, 1.5 Gbps, SATA mode
Sep 13 10:13:41.137555 kernel: ahci 0000:00:1f.2: 6/6 ports implemented (port mask 0x3f)
Sep 13 10:13:41.137775 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only
Sep 13 10:13:41.140095 kernel: AES CTR mode by8 optimization enabled
Sep 13 10:13:41.143049 kernel: scsi host0: ahci
Sep 13 10:13:41.143274 kernel: scsi host1: ahci
Sep 13 10:13:41.143495 kernel: scsi host2: ahci
Sep 13 10:13:41.144202 kernel: scsi host3: ahci
Sep 13 10:13:41.145004 kernel: scsi host4: ahci
Sep 13 10:13:41.146615 kernel: scsi host5: ahci
Sep 13 10:13:41.146821 kernel: ata1: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4100 irq 34 lpm-pol 1
Sep 13 10:13:41.146838 kernel: ata2: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4180 irq 34 lpm-pol 1
Sep 13 10:13:41.145495 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Sep 13 10:13:41.168416 kernel: ata3: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4200 irq 34 lpm-pol 1
Sep 13 10:13:41.168459 kernel: ata4: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4280 irq 34 lpm-pol 1
Sep 13 10:13:41.168475 kernel: ata5: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4300 irq 34 lpm-pol 1
Sep 13 10:13:41.168489 kernel: ata6: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4380 irq 34 lpm-pol 1
Sep 13 10:13:41.145668 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Sep 13 10:13:41.170744 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Sep 13 10:13:41.175328 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Sep 13 10:13:41.178550 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
Sep 13 10:13:41.205458 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Sep 13 10:13:41.214480 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Sep 13 10:13:41.253850 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Sep 13 10:13:41.265369 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Sep 13 10:13:41.274476 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Sep 13 10:13:41.275958 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
Sep 13 10:13:41.281377 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Sep 13 10:13:41.319213 disk-uuid[622]: Primary Header is updated.
Sep 13 10:13:41.319213 disk-uuid[622]: Secondary Entries is updated.
Sep 13 10:13:41.319213 disk-uuid[622]: Secondary Header is updated.
Sep 13 10:13:41.324007 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Sep 13 10:13:41.327995 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Sep 13 10:13:41.461029 kernel: ata2: SATA link down (SStatus 0 SControl 300)
Sep 13 10:13:41.461166 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300)
Sep 13 10:13:41.461463 kernel: ata1: SATA link down (SStatus 0 SControl 300)
Sep 13 10:13:41.462003 kernel: ata3.00: LPM support broken, forcing max_power
Sep 13 10:13:41.462540 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100
Sep 13 10:13:41.463752 kernel: ata3.00: applying bridge limits
Sep 13 10:13:41.464543 kernel: ata3.00: LPM support broken, forcing max_power
Sep 13 10:13:41.464567 kernel: ata3.00: configured for UDMA/100
Sep 13 10:13:41.466010 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5
Sep 13 10:13:41.470004 kernel: ata4: SATA link down (SStatus 0 SControl 300)
Sep 13 10:13:41.470039 kernel: ata5: SATA link down (SStatus 0 SControl 300)
Sep 13 10:13:41.471000 kernel: ata6: SATA link down (SStatus 0 SControl 300)
Sep 13 10:13:41.530007 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray
Sep 13 10:13:41.530347 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Sep 13 10:13:41.545002 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0
Sep 13 10:13:41.913725 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Sep 13 10:13:41.914489 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Sep 13 10:13:41.917323 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Sep 13 10:13:41.917527 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Sep 13 10:13:41.918915 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Sep 13 10:13:41.952263 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Sep 13 10:13:42.331004 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Sep 13 10:13:42.331262 disk-uuid[623]: The operation has completed successfully.
Sep 13 10:13:42.363014 systemd[1]: disk-uuid.service: Deactivated successfully.
Sep 13 10:13:42.363171 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Sep 13 10:13:42.405234 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Sep 13 10:13:42.428311 sh[663]: Success
Sep 13 10:13:42.448123 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Sep 13 10:13:42.448202 kernel: device-mapper: uevent: version 1.0.3
Sep 13 10:13:42.449188 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev
Sep 13 10:13:42.458994 kernel: device-mapper: verity: sha256 using shash "sha256-ni"
Sep 13 10:13:42.495805 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Sep 13 10:13:42.500346 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Sep 13 10:13:42.519336 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Sep 13 10:13:42.528470 kernel: BTRFS: device fsid fbf3e737-db97-4ff7-a1f5-c4d4b7390663 devid 1 transid 36 /dev/mapper/usr (253:0) scanned by mount (675)
Sep 13 10:13:42.528507 kernel: BTRFS info (device dm-0): first mount of filesystem fbf3e737-db97-4ff7-a1f5-c4d4b7390663
Sep 13 10:13:42.528518 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Sep 13 10:13:42.535049 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Sep 13 10:13:42.535112 kernel: BTRFS info (device dm-0): enabling free space tree
Sep 13 10:13:42.536754 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Sep 13 10:13:42.537561 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System.
Sep 13 10:13:42.539653 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Sep 13 10:13:42.540807 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Sep 13 10:13:42.545050 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Sep 13 10:13:42.575090 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (708)
Sep 13 10:13:42.577398 kernel: BTRFS info (device vda6): first mount of filesystem 69dbcaf3-1008-473f-af83-060bcefcf397
Sep 13 10:13:42.577469 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Sep 13 10:13:42.581044 kernel: BTRFS info (device vda6): turning on async discard
Sep 13 10:13:42.581129 kernel: BTRFS info (device vda6): enabling free space tree
Sep 13 10:13:42.586009 kernel: BTRFS info (device vda6): last unmount of filesystem 69dbcaf3-1008-473f-af83-060bcefcf397
Sep 13 10:13:42.587879 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Sep 13 10:13:42.589324 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Sep 13 10:13:42.778417 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Sep 13 10:13:42.787590 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Sep 13 10:13:42.859832 systemd-networkd[851]: lo: Link UP
Sep 13 10:13:42.859851 systemd-networkd[851]: lo: Gained carrier
Sep 13 10:13:42.863809 systemd-networkd[851]: Enumeration completed
Sep 13 10:13:42.863963 systemd[1]: Started systemd-networkd.service - Network Configuration.
Sep 13 10:13:42.864917 systemd-networkd[851]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Sep 13 10:13:42.864928 systemd-networkd[851]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Sep 13 10:13:42.865949 systemd-networkd[851]: eth0: Link UP
Sep 13 10:13:42.866881 systemd[1]: Reached target network.target - Network.
Sep 13 10:13:42.875675 systemd-networkd[851]: eth0: Gained carrier
Sep 13 10:13:42.875710 systemd-networkd[851]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Sep 13 10:13:42.885810 ignition[752]: Ignition 2.22.0
Sep 13 10:13:42.885828 ignition[752]: Stage: fetch-offline
Sep 13 10:13:42.885867 ignition[752]: no configs at "/usr/lib/ignition/base.d"
Sep 13 10:13:42.885877 ignition[752]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Sep 13 10:13:42.885991 ignition[752]: parsed url from cmdline: ""
Sep 13 10:13:42.885995 ignition[752]: no config URL provided
Sep 13 10:13:42.886000 ignition[752]: reading system config file "/usr/lib/ignition/user.ign"
Sep 13 10:13:42.886008 ignition[752]: no config at "/usr/lib/ignition/user.ign"
Sep 13 10:13:42.886038 ignition[752]: op(1): [started] loading QEMU firmware config module
Sep 13 10:13:42.894036 systemd-networkd[851]: eth0: DHCPv4 address 10.0.0.19/16, gateway 10.0.0.1 acquired from 10.0.0.1
Sep 13 10:13:42.886043 ignition[752]: op(1): executing: "modprobe" "qemu_fw_cfg"
Sep 13 10:13:42.898317 ignition[752]: op(1): [finished] loading QEMU firmware config module
Sep 13 10:13:42.938471 ignition[752]: parsing config with SHA512: dc6dd1577c208491cff8ebfb1151a468c39365d6c4165954e9a2a8fe97acf74e9740ffc16e6cb3046b5d69fc0933923434b747eb3721a67ac1f0720d7898acef
Sep 13 10:13:42.944323 unknown[752]: fetched base config from "system"
Sep 13 10:13:42.944350 unknown[752]: fetched user config from "qemu"
Sep 13 10:13:42.945260 ignition[752]: fetch-offline: fetch-offline passed
Sep 13 10:13:42.945368 ignition[752]: Ignition finished successfully
Sep 13 10:13:42.951145 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Sep 13 10:13:42.952752 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
Sep 13 10:13:42.954086 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Sep 13 10:13:43.075122 ignition[859]: Ignition 2.22.0
Sep 13 10:13:43.075144 ignition[859]: Stage: kargs
Sep 13 10:13:43.075538 ignition[859]: no configs at "/usr/lib/ignition/base.d"
Sep 13 10:13:43.075558 ignition[859]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Sep 13 10:13:43.076313 ignition[859]: kargs: kargs passed
Sep 13 10:13:43.076361 ignition[859]: Ignition finished successfully
Sep 13 10:13:43.084265 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Sep 13 10:13:43.086442 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Sep 13 10:13:43.131428 ignition[867]: Ignition 2.22.0
Sep 13 10:13:43.131443 ignition[867]: Stage: disks
Sep 13 10:13:43.131600 ignition[867]: no configs at "/usr/lib/ignition/base.d"
Sep 13 10:13:43.131612 ignition[867]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Sep 13 10:13:43.135614 ignition[867]: disks: disks passed
Sep 13 10:13:43.136332 ignition[867]: Ignition finished successfully
Sep 13 10:13:43.139171 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Sep 13 10:13:43.140540 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Sep 13 10:13:43.142579 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Sep 13 10:13:43.143855 systemd[1]: Reached target local-fs.target - Local File Systems.
Sep 13 10:13:43.144833 systemd[1]: Reached target sysinit.target - System Initialization.
Sep 13 10:13:43.147127 systemd[1]: Reached target basic.target - Basic System.
Sep 13 10:13:43.149278 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Sep 13 10:13:43.178322 systemd-fsck[877]: ROOT: clean, 15/553520 files, 52789/553472 blocks
Sep 13 10:13:43.216327 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Sep 13 10:13:43.219401 systemd[1]: Mounting sysroot.mount - /sysroot...
Sep 13 10:13:43.353997 kernel: EXT4-fs (vda9): mounted filesystem 1fad58d4-1271-484a-aa8e-8f7f5dca764c r/w with ordered data mode. Quota mode: none.
Sep 13 10:13:43.354513 systemd[1]: Mounted sysroot.mount - /sysroot.
Sep 13 10:13:43.355346 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Sep 13 10:13:43.358549 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Sep 13 10:13:43.359493 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Sep 13 10:13:43.361365 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Sep 13 10:13:43.361412 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Sep 13 10:13:43.361439 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Sep 13 10:13:43.377599 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Sep 13 10:13:43.381297 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Sep 13 10:13:43.406018 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (886)
Sep 13 10:13:43.408513 kernel: BTRFS info (device vda6): first mount of filesystem 69dbcaf3-1008-473f-af83-060bcefcf397
Sep 13 10:13:43.408545 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Sep 13 10:13:43.412016 kernel: BTRFS info (device vda6): turning on async discard
Sep 13 10:13:43.412044 kernel: BTRFS info (device vda6): enabling free space tree
Sep 13 10:13:43.414155 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Sep 13 10:13:43.444244 initrd-setup-root[910]: cut: /sysroot/etc/passwd: No such file or directory
Sep 13 10:13:43.449032 initrd-setup-root[917]: cut: /sysroot/etc/group: No such file or directory
Sep 13 10:13:43.454043 initrd-setup-root[924]: cut: /sysroot/etc/shadow: No such file or directory
Sep 13 10:13:43.459111 initrd-setup-root[931]: cut: /sysroot/etc/gshadow: No such file or directory
Sep 13 10:13:43.557622 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Sep 13 10:13:43.560591 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Sep 13 10:13:43.562465 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Sep 13 10:13:43.588180 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Sep 13 10:13:43.589689 kernel: BTRFS info (device vda6): last unmount of filesystem 69dbcaf3-1008-473f-af83-060bcefcf397
Sep 13 10:13:43.603618 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Sep 13 10:13:43.621938 ignition[999]: INFO : Ignition 2.22.0
Sep 13 10:13:43.621938 ignition[999]: INFO : Stage: mount
Sep 13 10:13:43.623867 ignition[999]: INFO : no configs at "/usr/lib/ignition/base.d"
Sep 13 10:13:43.623867 ignition[999]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Sep 13 10:13:43.626531 ignition[999]: INFO : mount: mount passed
Sep 13 10:13:43.627259 ignition[999]: INFO : Ignition finished successfully
Sep 13 10:13:43.630288 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Sep 13 10:13:43.631647 systemd[1]: Starting ignition-files.service - Ignition (files)...
Sep 13 10:13:43.665540 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Sep 13 10:13:43.702009 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (1012)
Sep 13 10:13:43.704083 kernel: BTRFS info (device vda6): first mount of filesystem 69dbcaf3-1008-473f-af83-060bcefcf397
Sep 13 10:13:43.704107 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Sep 13 10:13:43.709244 kernel: BTRFS info (device vda6): turning on async discard
Sep 13 10:13:43.709270 kernel: BTRFS info (device vda6): enabling free space tree
Sep 13 10:13:43.711615 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Sep 13 10:13:43.774634 ignition[1029]: INFO : Ignition 2.22.0
Sep 13 10:13:43.774634 ignition[1029]: INFO : Stage: files
Sep 13 10:13:43.776607 ignition[1029]: INFO : no configs at "/usr/lib/ignition/base.d"
Sep 13 10:13:43.776607 ignition[1029]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Sep 13 10:13:43.779769 ignition[1029]: DEBUG : files: compiled without relabeling support, skipping
Sep 13 10:13:43.781496 ignition[1029]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Sep 13 10:13:43.781496 ignition[1029]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Sep 13 10:13:43.786598 ignition[1029]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Sep 13 10:13:43.788196 ignition[1029]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Sep 13 10:13:43.789722 ignition[1029]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Sep 13 10:13:43.788651 unknown[1029]: wrote ssh authorized keys file for user: core
Sep 13 10:13:43.792550 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz"
Sep 13 10:13:43.792550 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz: attempt #1
Sep 13 10:13:43.830610 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Sep 13 10:13:43.924841 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz"
Sep 13 10:13:43.924841 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz"
Sep 13 10:13:43.928984 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1
Sep 13 10:13:44.180171 systemd-networkd[851]: eth0: Gained IPv6LL
Sep 13 10:13:44.237624 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
Sep 13 10:13:44.768079 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz"
Sep 13 10:13:44.770774 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh"
Sep 13 10:13:44.770774 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh"
Sep 13 10:13:44.770774 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml"
Sep 13 10:13:44.770774 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml"
Sep 13 10:13:44.770774 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Sep 13 10:13:44.770774 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Sep 13 10:13:44.770774 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Sep 13 10:13:44.770774 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Sep 13 10:13:44.785291 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf"
Sep 13 10:13:44.785291 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Sep 13 10:13:44.785291 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw"
Sep 13 10:13:44.785291 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw"
Sep 13 10:13:44.785291 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw"
Sep 13 10:13:44.785291 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://extensions.flatcar.org/extensions/kubernetes-v1.33.0-x86-64.raw: attempt #1
Sep 13 10:13:45.096492 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK
Sep 13 10:13:46.015892 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw"
Sep 13 10:13:46.015892 ignition[1029]: INFO : files: op(c): [started] processing unit "prepare-helm.service"
Sep 13 10:13:46.066832 ignition[1029]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Sep 13 10:13:46.267179 ignition[1029]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Sep 13 10:13:46.267179 ignition[1029]: INFO : files: op(c): [finished] processing unit "prepare-helm.service"
Sep 13 10:13:46.267179 ignition[1029]: INFO : files: op(e): [started] processing unit "coreos-metadata.service"
Sep 13 10:13:46.267179 ignition[1029]: INFO : files: op(e): op(f): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Sep 13 10:13:46.273874 ignition[1029]: INFO : files: op(e): op(f): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Sep 13 10:13:46.273874 ignition[1029]: INFO : files: op(e): [finished] processing unit "coreos-metadata.service"
Sep 13 10:13:46.273874 ignition[1029]: INFO : files: op(10): [started] setting preset to disabled for "coreos-metadata.service"
Sep 13 10:13:46.294919 ignition[1029]: INFO : files: op(10): op(11): [started] removing enablement symlink(s) for "coreos-metadata.service"
Sep 13 10:13:46.301230 ignition[1029]: INFO : files: op(10): op(11): [finished] removing enablement symlink(s) for "coreos-metadata.service"
Sep 13 10:13:46.334360 ignition[1029]: INFO : files: op(10): [finished] setting preset to disabled for "coreos-metadata.service"
Sep 13 10:13:46.334360 ignition[1029]: INFO : files: op(12): [started] setting preset to enabled for "prepare-helm.service"
Sep 13 10:13:46.337262 ignition[1029]: INFO : files: op(12): [finished] setting preset to enabled for "prepare-helm.service"
Sep 13 10:13:46.337262 ignition[1029]: INFO : files: createResultFile: createFiles: op(13): [started] writing file "/sysroot/etc/.ignition-result.json"
Sep 13 10:13:46.337262 ignition[1029]: INFO : files: createResultFile: createFiles: op(13): [finished] writing file "/sysroot/etc/.ignition-result.json"
Sep 13 10:13:46.337262 ignition[1029]: INFO : files: files passed
Sep 13 10:13:46.337262 ignition[1029]: INFO : Ignition finished successfully
Sep 13 10:13:46.346361 systemd[1]: Finished ignition-files.service - Ignition (files).
Sep 13 10:13:46.348722 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Sep 13 10:13:46.351091 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Sep 13 10:13:46.369194 systemd[1]: ignition-quench.service: Deactivated successfully.
Sep 13 10:13:46.369437 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Sep 13 10:13:46.373871 initrd-setup-root-after-ignition[1059]: grep: /sysroot/oem/oem-release: No such file or directory
Sep 13 10:13:46.377575 initrd-setup-root-after-ignition[1061]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Sep 13 10:13:46.377575 initrd-setup-root-after-ignition[1061]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Sep 13 10:13:46.381121 initrd-setup-root-after-ignition[1065]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Sep 13 10:13:46.384746 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Sep 13 10:13:46.386535 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Sep 13 10:13:46.389843 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Sep 13 10:13:46.451673 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Sep 13 10:13:46.451806 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Sep 13 10:13:46.468677 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Sep 13 10:13:46.469833 systemd[1]: Reached target initrd.target - Initrd Default Target.
Sep 13 10:13:46.471952 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Sep 13 10:13:46.475592 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Sep 13 10:13:46.507658 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Sep 13 10:13:46.511878 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Sep 13 10:13:46.549842 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Sep 13 10:13:46.550082 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Sep 13 10:13:46.550444 systemd[1]: Stopped target timers.target - Timer Units.
Sep 13 10:13:46.550760 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Sep 13 10:13:46.550897 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Sep 13 10:13:46.558894 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Sep 13 10:13:46.559061 systemd[1]: Stopped target basic.target - Basic System.
Sep 13 10:13:46.561200 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Sep 13 10:13:46.563128 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Sep 13 10:13:46.566683 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Sep 13 10:13:46.568002 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System.
Sep 13 10:13:46.571486 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Sep 13 10:13:46.574717 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Sep 13 10:13:46.574861 systemd[1]: Stopped target sysinit.target - System Initialization.
Sep 13 10:13:46.577512 systemd[1]: Stopped target local-fs.target - Local File Systems.
Sep 13 10:13:46.577844 systemd[1]: Stopped target swap.target - Swaps.
Sep 13 10:13:46.578363 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Sep 13 10:13:46.578505 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Sep 13 10:13:46.585799 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Sep 13 10:13:46.585935 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Sep 13 10:13:46.589057 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Sep 13 10:13:46.590243 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Sep 13 10:13:46.591540 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Sep 13 10:13:46.591655 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Sep 13 10:13:46.666172 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Sep 13 10:13:46.666299 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Sep 13 10:13:46.668611 systemd[1]: Stopped target paths.target - Path Units.
Sep 13 10:13:46.669687 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Sep 13 10:13:46.675024 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Sep 13 10:13:46.675183 systemd[1]: Stopped target slices.target - Slice Units.
Sep 13 10:13:46.678751 systemd[1]: Stopped target sockets.target - Socket Units.
Sep 13 10:13:46.679693 systemd[1]: iscsid.socket: Deactivated successfully.
Sep 13 10:13:46.679784 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Sep 13 10:13:46.681416 systemd[1]: iscsiuio.socket: Deactivated successfully.
Sep 13 10:13:46.681515 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Sep 13 10:13:46.683288 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Sep 13 10:13:46.683392 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Sep 13 10:13:46.686435 systemd[1]: ignition-files.service: Deactivated successfully.
Sep 13 10:13:46.686541 systemd[1]: Stopped ignition-files.service - Ignition (files).
Sep 13 10:13:46.691735 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Sep 13 10:13:46.692802 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Sep 13 10:13:46.692948 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Sep 13 10:13:46.706330 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Sep 13 10:13:46.708201 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Sep 13 10:13:46.708348 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Sep 13 10:13:46.710983 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Sep 13 10:13:46.711175 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Sep 13 10:13:46.718732 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Sep 13 10:13:46.719228 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Sep 13 10:13:46.733919 ignition[1085]: INFO : Ignition 2.22.0
Sep 13 10:13:46.733919 ignition[1085]: INFO : Stage: umount
Sep 13 10:13:46.735826 ignition[1085]: INFO : no configs at "/usr/lib/ignition/base.d"
Sep 13 10:13:46.735826 ignition[1085]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Sep 13 10:13:46.735826 ignition[1085]: INFO : umount: umount passed
Sep 13 10:13:46.735826 ignition[1085]: INFO : Ignition finished successfully
Sep 13 10:13:46.738050 systemd[1]: ignition-mount.service: Deactivated successfully.
Sep 13 10:13:46.738197 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Sep 13 10:13:46.740282 systemd[1]: Stopped target network.target - Network.
Sep 13 10:13:46.741146 systemd[1]: ignition-disks.service: Deactivated successfully.
Sep 13 10:13:46.741209 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Sep 13 10:13:46.741837 systemd[1]: ignition-kargs.service: Deactivated successfully.
Sep 13 10:13:46.741887 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Sep 13 10:13:46.742058 systemd[1]: ignition-setup.service: Deactivated successfully.
Sep 13 10:13:46.742105 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Sep 13 10:13:46.742373 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Sep 13 10:13:46.742414 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Sep 13 10:13:46.742793 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Sep 13 10:13:46.743274 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Sep 13 10:13:46.744601 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Sep 13 10:13:46.751740 systemd[1]: systemd-resolved.service: Deactivated successfully.
Sep 13 10:13:46.751858 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Sep 13 10:13:46.756825 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully.
Sep 13 10:13:46.757151 systemd[1]: sysroot-boot.service: Deactivated successfully.
Sep 13 10:13:46.757266 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Sep 13 10:13:46.758252 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Sep 13 10:13:46.758337 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Sep 13 10:13:46.760173 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Sep 13 10:13:46.760230 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Sep 13 10:13:46.764146 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully.
Sep 13 10:13:46.764381 systemd[1]: systemd-networkd.service: Deactivated successfully.
Sep 13 10:13:46.764496 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Sep 13 10:13:46.769630 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully.
Sep 13 10:13:46.771196 systemd[1]: Stopped target network-pre.target - Preparation for Network.
Sep 13 10:13:46.774395 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Sep 13 10:13:46.774446 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Sep 13 10:13:46.781676 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Sep 13 10:13:46.782681 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Sep 13 10:13:46.782733 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Sep 13 10:13:46.785237 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Sep 13 10:13:46.785285 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Sep 13 10:13:46.788679 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Sep 13 10:13:46.788725 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Sep 13 10:13:46.789638 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Sep 13 10:13:46.790746 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Sep 13 10:13:46.814008 systemd[1]: systemd-udevd.service: Deactivated successfully.
Sep 13 10:13:46.814266 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Sep 13 10:13:46.817679 systemd[1]: network-cleanup.service: Deactivated successfully.
Sep 13 10:13:46.817819 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Sep 13 10:13:46.819278 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Sep 13 10:13:46.819364 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Sep 13 10:13:46.820471 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Sep 13 10:13:46.820508 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Sep 13 10:13:46.820765 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Sep 13 10:13:46.820817 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Sep 13 10:13:46.821599 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Sep 13 10:13:46.821647 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Sep 13 10:13:46.828236 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Sep 13 10:13:46.828290 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Sep 13 10:13:46.832093 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Sep 13 10:13:46.873267 systemd[1]: systemd-network-generator.service: Deactivated successfully.
Sep 13 10:13:46.873353 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line.
Sep 13 10:13:46.876790 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Sep 13 10:13:46.876852 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Sep 13 10:13:46.880146 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Sep 13 10:13:46.880210 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Sep 13 10:13:46.884044 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Sep 13 10:13:46.884170 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Sep 13 10:13:46.887292 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Sep 13 10:13:46.889993 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Sep 13 10:13:46.911310 systemd[1]: Switching root.
Sep 13 10:13:47.053788 systemd-journald[220]: Journal stopped
Sep 13 10:13:51.131094 systemd-journald[220]: Received SIGTERM from PID 1 (systemd).
Sep 13 10:13:51.131154 kernel: SELinux: policy capability network_peer_controls=1
Sep 13 10:13:51.131169 kernel: SELinux: policy capability open_perms=1
Sep 13 10:13:51.131185 kernel: SELinux: policy capability extended_socket_class=1
Sep 13 10:13:51.131447 kernel: SELinux: policy capability always_check_network=0
Sep 13 10:13:51.131464 kernel: SELinux: policy capability cgroup_seclabel=1
Sep 13 10:13:51.131479 kernel: SELinux: policy capability nnp_nosuid_transition=1
Sep 13 10:13:51.131491 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Sep 13 10:13:51.131503 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Sep 13 10:13:51.131515 kernel: SELinux: policy capability userspace_initial_context=0
Sep 13 10:13:51.131531 kernel: audit: type=1403 audit(1757758430.240:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Sep 13 10:13:51.131544 systemd[1]: Successfully loaded SELinux policy in 64.692ms.
Sep 13 10:13:51.131569 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 9.692ms.
Sep 13 10:13:51.131582 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Sep 13 10:13:51.131595 systemd[1]: Detected virtualization kvm.
Sep 13 10:13:51.131610 systemd[1]: Detected architecture x86-64.
Sep 13 10:13:51.131623 systemd[1]: Detected first boot.
Sep 13 10:13:51.131635 systemd[1]: Initializing machine ID from VM UUID.
Sep 13 10:13:51.131647 zram_generator::config[1130]: No configuration found.
Sep 13 10:13:51.131661 kernel: Guest personality initialized and is inactive
Sep 13 10:13:51.131673 kernel: VMCI host device registered (name=vmci, major=10, minor=125)
Sep 13 10:13:51.131685 kernel: Initialized host personality
Sep 13 10:13:51.131696 kernel: NET: Registered PF_VSOCK protocol family
Sep 13 10:13:51.131711 systemd[1]: Populated /etc with preset unit settings.
Sep 13 10:13:51.131724 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully.
Sep 13 10:13:51.131742 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Sep 13 10:13:51.131763 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Sep 13 10:13:51.131776 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Sep 13 10:13:51.131789 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Sep 13 10:13:51.131801 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Sep 13 10:13:51.131814 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Sep 13 10:13:51.131826 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Sep 13 10:13:51.131841 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Sep 13 10:13:51.131853 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Sep 13 10:13:51.131866 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Sep 13 10:13:51.131878 systemd[1]: Created slice user.slice - User and Session Slice.
Sep 13 10:13:51.131891 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Sep 13 10:13:51.131904 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Sep 13 10:13:51.131916 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Sep 13 10:13:51.131941 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Sep 13 10:13:51.131954 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Sep 13 10:13:51.132075 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Sep 13 10:13:51.132089 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Sep 13 10:13:51.132101 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Sep 13 10:13:51.132114 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Sep 13 10:13:51.132126 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Sep 13 10:13:51.132138 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Sep 13 10:13:51.132155 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Sep 13 10:13:51.132171 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Sep 13 10:13:51.132186 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Sep 13 10:13:51.132208 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Sep 13 10:13:51.132227 systemd[1]: Reached target slices.target - Slice Units.
Sep 13 10:13:51.132249 systemd[1]: Reached target swap.target - Swaps.
Sep 13 10:13:51.132272 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Sep 13 10:13:51.132285 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Sep 13 10:13:51.132297 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption.
Sep 13 10:13:51.132310 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Sep 13 10:13:51.132322 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Sep 13 10:13:51.132337 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Sep 13 10:13:51.132349 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Sep 13 10:13:51.132362 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Sep 13 10:13:51.132374 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Sep 13 10:13:51.132386 systemd[1]: Mounting media.mount - External Media Directory...
Sep 13 10:13:51.132399 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 13 10:13:51.132411 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Sep 13 10:13:51.132424 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Sep 13 10:13:51.132438 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Sep 13 10:13:51.132451 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Sep 13 10:13:51.132464 systemd[1]: Reached target machines.target - Containers.
Sep 13 10:13:51.132476 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Sep 13 10:13:51.132488 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Sep 13 10:13:51.132500 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Sep 13 10:13:51.132513 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Sep 13 10:13:51.132525 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Sep 13 10:13:51.132537 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Sep 13 10:13:51.132551 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Sep 13 10:13:51.132564 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Sep 13 10:13:51.132576 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Sep 13 10:13:51.132589 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Sep 13 10:13:51.132601 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Sep 13 10:13:51.132614 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Sep 13 10:13:51.132627 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Sep 13 10:13:51.132639 systemd[1]: Stopped systemd-fsck-usr.service.
Sep 13 10:13:51.132654 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Sep 13 10:13:51.132666 systemd[1]: Starting systemd-journald.service - Journal Service...
Sep 13 10:13:51.132678 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Sep 13 10:13:51.132691 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Sep 13 10:13:51.132703 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Sep 13 10:13:51.132716 kernel: loop: module loaded
Sep 13 10:13:51.132730 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials...
Sep 13 10:13:51.132742 kernel: ACPI: bus type drm_connector registered
Sep 13 10:13:51.132754 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Sep 13 10:13:51.132766 systemd[1]: verity-setup.service: Deactivated successfully.
Sep 13 10:13:51.132778 systemd[1]: Stopped verity-setup.service.
Sep 13 10:13:51.132795 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 13 10:13:51.132807 kernel: fuse: init (API version 7.41)
Sep 13 10:13:51.132818 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Sep 13 10:13:51.132833 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Sep 13 10:13:51.132845 systemd[1]: Mounted media.mount - External Media Directory.
Sep 13 10:13:51.132858 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Sep 13 10:13:51.132870 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Sep 13 10:13:51.132884 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Sep 13 10:13:51.132900 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Sep 13 10:13:51.132912 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Sep 13 10:13:51.132932 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Sep 13 10:13:51.132945 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Sep 13 10:13:51.132958 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Sep 13 10:13:51.132983 systemd[1]: modprobe@drm.service: Deactivated successfully.
Sep 13 10:13:51.132995 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Sep 13 10:13:51.133008 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Sep 13 10:13:51.133023 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Sep 13 10:13:51.133038 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Sep 13 10:13:51.133051 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Sep 13 10:13:51.133063 systemd[1]: modprobe@loop.service: Deactivated successfully.
Sep 13 10:13:51.133075 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Sep 13 10:13:51.133087 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Sep 13 10:13:51.133100 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Sep 13 10:13:51.133113 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Sep 13 10:13:51.133125 systemd[1]: Reached target network-pre.target - Preparation for Network.
Sep 13 10:13:51.133140 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Sep 13 10:13:51.133174 systemd-journald[1201]: Collecting audit messages is disabled.
Sep 13 10:13:51.133198 systemd-journald[1201]: Journal started
Sep 13 10:13:51.133223 systemd-journald[1201]: Runtime Journal (/run/log/journal/6d1f4e0ad6184f8789d1a7137a3e3c2f) is 6M, max 48.6M, 42.5M free.
Sep 13 10:13:50.827830 systemd[1]: Queued start job for default target multi-user.target.
Sep 13 10:13:50.848430 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
Sep 13 10:13:50.848907 systemd[1]: systemd-journald.service: Deactivated successfully.
Sep 13 10:13:51.136745 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Sep 13 10:13:51.136773 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Sep 13 10:13:51.138318 systemd[1]: Reached target local-fs.target - Local File Systems.
Sep 13 10:13:51.142498 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management.
Sep 13 10:13:51.145000 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Sep 13 10:13:51.148997 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Sep 13 10:13:51.161279 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Sep 13 10:13:51.164629 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Sep 13 10:13:51.167988 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Sep 13 10:13:51.168029 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Sep 13 10:13:51.177084 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Sep 13 10:13:51.198115 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Sep 13 10:13:51.203039 systemd[1]: Started systemd-journald.service - Journal Service.
Sep 13 10:13:51.205586 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Sep 13 10:13:51.207222 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials.
Sep 13 10:13:51.208626 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Sep 13 10:13:51.211043 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Sep 13 10:13:51.214342 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Sep 13 10:13:51.235475 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Sep 13 10:13:51.240721 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Sep 13 10:13:51.269007 kernel: loop0: detected capacity change from 0 to 128016
Sep 13 10:13:51.273401 systemd-journald[1201]: Time spent on flushing to /var/log/journal/6d1f4e0ad6184f8789d1a7137a3e3c2f is 25.746ms for 988 entries.
Sep 13 10:13:51.273401 systemd-journald[1201]: System Journal (/var/log/journal/6d1f4e0ad6184f8789d1a7137a3e3c2f) is 8M, max 195.6M, 187.6M free.
Sep 13 10:13:51.777536 systemd-journald[1201]: Received client request to flush runtime journal.
Sep 13 10:13:51.777628 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Sep 13 10:13:51.777661 kernel: loop1: detected capacity change from 0 to 110984
Sep 13 10:13:51.777687 kernel: loop2: detected capacity change from 0 to 229808
Sep 13 10:13:51.777707 kernel: loop3: detected capacity change from 0 to 128016
Sep 13 10:13:51.777726 kernel: loop4: detected capacity change from 0 to 110984
Sep 13 10:13:51.777744 kernel: loop5: detected capacity change from 0 to 229808
Sep 13 10:13:51.777766 zram_generator::config[1304]: No configuration found.
Sep 13 10:13:51.275248 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Sep 13 10:13:51.367771 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Sep 13 10:13:51.369372 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Sep 13 10:13:51.374878 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk...
Sep 13 10:13:51.472882 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Sep 13 10:13:51.475636 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Sep 13 10:13:51.518278 (sd-merge)[1264]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'.
Sep 13 10:13:51.518450 systemd-tmpfiles[1266]: ACLs are not supported, ignoring.
Sep 13 10:13:51.518468 systemd-tmpfiles[1266]: ACLs are not supported, ignoring.
Sep 13 10:13:51.519201 (sd-merge)[1264]: Merged extensions into '/usr'.
Sep 13 10:13:51.524677 systemd[1]: Reload requested from client PID 1230 ('systemd-sysext') (unit systemd-sysext.service)...
Sep 13 10:13:51.524690 systemd[1]: Reloading...
Sep 13 10:13:52.018587 systemd[1]: Reloading finished in 493 ms.
Sep 13 10:13:52.047086 ldconfig[1226]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Sep 13 10:13:52.079594 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Sep 13 10:13:52.101952 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Sep 13 10:13:52.103909 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Sep 13 10:13:52.105671 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk.
Sep 13 10:13:52.107718 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Sep 13 10:13:52.110339 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Sep 13 10:13:52.119480 systemd[1]: Starting ensure-sysext.service...
Sep 13 10:13:52.122023 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Sep 13 10:13:52.165728 systemd[1]: Reload requested from client PID 1335 ('systemctl') (unit ensure-sysext.service)...
Sep 13 10:13:52.165744 systemd[1]: Reloading...
Sep 13 10:13:52.175389 systemd-tmpfiles[1336]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring.
Sep 13 10:13:52.175447 systemd-tmpfiles[1336]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring.
Sep 13 10:13:52.175843 systemd-tmpfiles[1336]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Sep 13 10:13:52.176221 systemd-tmpfiles[1336]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Sep 13 10:13:52.177457 systemd-tmpfiles[1336]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Sep 13 10:13:52.177851 systemd-tmpfiles[1336]: ACLs are not supported, ignoring.
Sep 13 10:13:52.177955 systemd-tmpfiles[1336]: ACLs are not supported, ignoring.
Sep 13 10:13:52.184691 systemd-tmpfiles[1336]: Detected autofs mount point /boot during canonicalization of boot.
Sep 13 10:13:52.184802 systemd-tmpfiles[1336]: Skipping /boot
Sep 13 10:13:52.199882 systemd-tmpfiles[1336]: Detected autofs mount point /boot during canonicalization of boot.
Sep 13 10:13:52.200031 systemd-tmpfiles[1336]: Skipping /boot
Sep 13 10:13:52.234008 zram_generator::config[1362]: No configuration found.
Sep 13 10:13:52.464524 systemd[1]: Reloading finished in 298 ms.
Sep 13 10:13:52.510334 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Sep 13 10:13:52.520124 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Sep 13 10:13:52.543951 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Sep 13 10:13:52.547357 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Sep 13 10:13:52.550832 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Sep 13 10:13:52.555044 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Sep 13 10:13:52.558706 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 13 10:13:52.559228 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Sep 13 10:13:52.562245 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Sep 13 10:13:52.565061 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Sep 13 10:13:52.569187 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Sep 13 10:13:52.570442 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Sep 13 10:13:52.570575 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Sep 13 10:13:52.570697 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 13 10:13:52.575076 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Sep 13 10:13:52.575358 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Sep 13 10:13:52.582844 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Sep 13 10:13:52.583867 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Sep 13 10:13:52.595579 systemd[1]: modprobe@loop.service: Deactivated successfully.
Sep 13 10:13:52.595922 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Sep 13 10:13:52.601792 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Sep 13 10:13:52.607030 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Sep 13 10:13:52.611227 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Sep 13 10:13:52.621269 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 13 10:13:52.621614 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Sep 13 10:13:52.624283 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Sep 13 10:13:52.624569 augenrules[1434]: No rules
Sep 13 10:13:52.627202 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Sep 13 10:13:52.631235 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Sep 13 10:13:52.638239 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Sep 13 10:13:52.639584 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Sep 13 10:13:52.639724 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Sep 13 10:13:52.641924 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Sep 13 10:13:52.645197 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Sep 13 10:13:52.649256 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Sep 13 10:13:52.650588 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 13 10:13:52.653511 systemd[1]: audit-rules.service: Deactivated successfully.
Sep 13 10:13:52.658264 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Sep 13 10:13:52.660493 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Sep 13 10:13:52.662871 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Sep 13 10:13:52.663247 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Sep 13 10:13:52.665850 systemd[1]: modprobe@drm.service: Deactivated successfully.
Sep 13 10:13:52.666207 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Sep 13 10:13:52.668808 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Sep 13 10:13:52.669140 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Sep 13 10:13:52.671440 systemd[1]: modprobe@loop.service: Deactivated successfully.
Sep 13 10:13:52.671719 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Sep 13 10:13:52.673710 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Sep 13 10:13:52.680652 systemd[1]: Finished ensure-sysext.service.
Sep 13 10:13:52.691408 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Sep 13 10:13:52.691516 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Sep 13 10:13:52.693058 systemd-udevd[1443]: Using default interface naming scheme 'v255'.
Sep 13 10:13:52.694107 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Sep 13 10:13:52.696625 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Sep 13 10:13:52.723034 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Sep 13 10:13:52.726661 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Sep 13 10:13:52.737863 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Sep 13 10:13:52.817686 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
Sep 13 10:13:52.908064 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Sep 13 10:13:52.912727 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Sep 13 10:13:52.921077 kernel: mousedev: PS/2 mouse device common for all mice
Sep 13 10:13:52.950964 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3
Sep 13 10:13:52.950828 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Sep 13 10:13:52.957036 kernel: ACPI: button: Power Button [PWRF]
Sep 13 10:13:52.966823 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt
Sep 13 10:13:52.967099 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD
Sep 13 10:13:52.991798 systemd-networkd[1476]: lo: Link UP
Sep 13 10:13:52.991809 systemd-networkd[1476]: lo: Gained carrier
Sep 13 10:13:52.995764 systemd-networkd[1476]: Enumeration completed
Sep 13 10:13:52.995907 systemd[1]: Started systemd-networkd.service - Network Configuration.
Sep 13 10:13:52.997369 systemd-networkd[1476]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Sep 13 10:13:52.997382 systemd-networkd[1476]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Sep 13 10:13:52.998076 systemd-networkd[1476]: eth0: Link UP
Sep 13 10:13:52.998345 systemd-networkd[1476]: eth0: Gained carrier
Sep 13 10:13:52.998360 systemd-networkd[1476]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Sep 13 10:13:52.999813 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd...
Sep 13 10:13:53.010213 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Sep 13 10:13:53.026085 systemd-networkd[1476]: eth0: DHCPv4 address 10.0.0.19/16, gateway 10.0.0.1 acquired from 10.0.0.1
Sep 13 10:13:53.031318 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
Sep 13 10:13:53.032735 systemd[1]: Reached target time-set.target - System Time Set.
Sep 13 10:13:54.106102 systemd-timesyncd[1454]: Contacted time server 10.0.0.1:123 (10.0.0.1).
Sep 13 10:13:54.106183 systemd-timesyncd[1454]: Initial clock synchronization to Sat 2025-09-13 10:13:54.105967 UTC.
Sep 13 10:13:54.149575 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd.
Sep 13 10:13:54.159002 systemd-resolved[1408]: Positive Trust Anchors:
Sep 13 10:13:54.159026 systemd-resolved[1408]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Sep 13 10:13:54.159064 systemd-resolved[1408]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Sep 13 10:13:54.166209 systemd-resolved[1408]: Defaulting to hostname 'linux'.
Sep 13 10:13:54.170253 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Sep 13 10:13:54.171888 systemd[1]: Reached target network.target - Network.
Sep 13 10:13:54.173678 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Sep 13 10:13:54.175720 systemd[1]: Reached target sysinit.target - System Initialization.
Sep 13 10:13:54.177359 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Sep 13 10:13:54.179307 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Sep 13 10:13:54.181589 systemd[1]: Started google-oslogin-cache.timer - NSS cache refresh timer.
Sep 13 10:13:54.183235 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Sep 13 10:13:54.185711 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Sep 13 10:13:54.187230 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Sep 13 10:13:54.191049 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Sep 13 10:13:54.191081 systemd[1]: Reached target paths.target - Path Units.
Sep 13 10:13:54.237831 systemd[1]: Reached target timers.target - Timer Units.
Sep 13 10:13:54.241837 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Sep 13 10:13:54.246813 systemd[1]: Starting docker.socket - Docker Socket for the API...
Sep 13 10:13:54.254013 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local).
Sep 13 10:13:54.256242 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK).
Sep 13 10:13:54.257753 systemd[1]: Reached target ssh-access.target - SSH Access Available.
Sep 13 10:13:54.263313 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Sep 13 10:13:54.264875 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket.
Sep 13 10:13:54.267443 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Sep 13 10:13:54.273551 kernel: kvm_amd: TSC scaling supported
Sep 13 10:13:54.273629 kernel: kvm_amd: Nested Virtualization enabled
Sep 13 10:13:54.273687 kernel: kvm_amd: Nested Paging enabled
Sep 13 10:13:54.274768 kernel: kvm_amd: LBR virtualization supported
Sep 13 10:13:54.275530 kernel: kvm_amd: Virtual VMLOAD VMSAVE supported
Sep 13 10:13:54.275552 kernel: kvm_amd: Virtual GIF supported
Sep 13 10:13:54.283620 systemd[1]: Reached target sockets.target - Socket Units.
Sep 13 10:13:54.284876 systemd[1]: Reached target basic.target - Basic System.
Sep 13 10:13:54.286021 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Sep 13 10:13:54.286079 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Sep 13 10:13:54.288441 systemd[1]: Starting containerd.service - containerd container runtime...
Sep 13 10:13:54.293335 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Sep 13 10:13:54.298624 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Sep 13 10:13:54.300069 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Sep 13 10:13:54.302801 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Sep 13 10:13:54.304121 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Sep 13 10:13:54.308643 systemd[1]: Starting google-oslogin-cache.service - NSS cache refresh...
Sep 13 10:13:54.312004 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Sep 13 10:13:54.314171 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Sep 13 10:13:54.314541 jq[1528]: false
Sep 13 10:13:54.318648 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Sep 13 10:13:54.321709 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Sep 13 10:13:54.326713 extend-filesystems[1529]: Found /dev/vda6
Sep 13 10:13:54.330593 kernel: EDAC MC: Ver: 3.0.0
Sep 13 10:13:54.331284 extend-filesystems[1529]: Found /dev/vda9
Sep 13 10:13:54.332209 google_oslogin_nss_cache[1530]: oslogin_cache_refresh[1530]: Refreshing passwd entry cache
Sep 13 10:13:54.330926 oslogin_cache_refresh[1530]: Refreshing passwd entry cache
Sep 13 10:13:54.334653 extend-filesystems[1529]: Checking size of /dev/vda9
Sep 13 10:13:54.337108 systemd[1]: Starting systemd-logind.service - User Login Management...
Sep 13 10:13:54.339536 google_oslogin_nss_cache[1530]: oslogin_cache_refresh[1530]: Failure getting users, quitting
Sep 13 10:13:54.339536 google_oslogin_nss_cache[1530]: oslogin_cache_refresh[1530]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak.
Sep 13 10:13:54.339536 google_oslogin_nss_cache[1530]: oslogin_cache_refresh[1530]: Refreshing group entry cache
Sep 13 10:13:54.339293 oslogin_cache_refresh[1530]: Failure getting users, quitting
Sep 13 10:13:54.339322 oslogin_cache_refresh[1530]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak.
Sep 13 10:13:54.339422 oslogin_cache_refresh[1530]: Refreshing group entry cache
Sep 13 10:13:54.341343 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Sep 13 10:13:54.343912 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Sep 13 10:13:54.344541 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Sep 13 10:13:54.347594 google_oslogin_nss_cache[1530]: oslogin_cache_refresh[1530]: Failure getting groups, quitting
Sep 13 10:13:54.347594 google_oslogin_nss_cache[1530]: oslogin_cache_refresh[1530]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak.
Sep 13 10:13:54.346942 oslogin_cache_refresh[1530]: Failure getting groups, quitting
Sep 13 10:13:54.346966 oslogin_cache_refresh[1530]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak.
Sep 13 10:13:54.347844 systemd[1]: Starting update-engine.service - Update Engine...
Sep 13 10:13:54.351582 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Sep 13 10:13:54.356555 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Sep 13 10:13:54.358297 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Sep 13 10:13:54.359647 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Sep 13 10:13:54.360064 systemd[1]: google-oslogin-cache.service: Deactivated successfully.
Sep 13 10:13:54.360354 systemd[1]: Finished google-oslogin-cache.service - NSS cache refresh.
Sep 13 10:13:54.365019 systemd[1]: motdgen.service: Deactivated successfully.
Sep 13 10:13:54.365160 extend-filesystems[1529]: Resized partition /dev/vda9
Sep 13 10:13:54.369316 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Sep 13 10:13:54.373284 extend-filesystems[1559]: resize2fs 1.47.3 (8-Jul-2025)
Sep 13 10:13:54.374733 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Sep 13 10:13:54.375378 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Sep 13 10:13:54.377520 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks
Sep 13 10:13:54.382288 jq[1550]: true
Sep 13 10:13:54.389520 (ntainerd)[1560]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Sep 13 10:13:54.407555 update_engine[1546]: I20250913 10:13:54.406625 1546 main.cc:92] Flatcar Update Engine starting
Sep 13 10:13:54.413593 jq[1564]: true
Sep 13 10:13:54.419349 tar[1558]: linux-amd64/LICENSE
Sep 13 10:13:54.420028 tar[1558]: linux-amd64/helm
Sep 13 10:13:54.427551 kernel: EXT4-fs (vda9): resized filesystem to 1864699
Sep 13 10:13:54.452918 extend-filesystems[1559]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required
Sep 13 10:13:54.452918 extend-filesystems[1559]: old_desc_blocks = 1, new_desc_blocks = 1
Sep 13 10:13:54.452918 extend-filesystems[1559]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long.
Sep 13 10:13:54.453765 extend-filesystems[1529]: Resized filesystem in /dev/vda9
Sep 13 10:13:54.457902 systemd[1]: extend-filesystems.service: Deactivated successfully.
Sep 13 10:13:54.458179 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Sep 13 10:13:54.461609 dbus-daemon[1526]: [system] SELinux support is enabled
Sep 13 10:13:54.463562 systemd-logind[1542]: Watching system buttons on /dev/input/event2 (Power Button)
Sep 13 10:13:54.463588 systemd-logind[1542]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
Sep 13 10:13:54.463882 systemd-logind[1542]: New seat seat0.
Sep 13 10:13:54.476117 update_engine[1546]: I20250913 10:13:54.476050 1546 update_check_scheduler.cc:74] Next update check in 6m46s
Sep 13 10:13:54.531212 bash[1592]: Updated "/home/core/.ssh/authorized_keys"
Sep 13 10:13:54.533915 sshd_keygen[1563]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Sep 13 10:13:54.654755 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Sep 13 10:13:54.658531 systemd[1]: Started systemd-logind.service - User Login Management.
Sep 13 10:13:54.659967 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Sep 13 10:13:54.661403 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Sep 13 10:13:54.662933 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Sep 13 10:13:54.671721 dbus-daemon[1526]: [system] Successfully activated service 'org.freedesktop.systemd1'
Sep 13 10:13:54.672166 systemd[1]: Starting issuegen.service - Generate /run/issue...
Sep 13 10:13:54.673823 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met.
Sep 13 10:13:54.673934 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Sep 13 10:13:54.673961 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Sep 13 10:13:54.675433 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Sep 13 10:13:54.675455 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Sep 13 10:13:54.676971 systemd[1]: Started update-engine.service - Update Engine.
Sep 13 10:13:54.682092 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Sep 13 10:13:54.691334 systemd[1]: issuegen.service: Deactivated successfully.
Sep 13 10:13:54.691632 systemd[1]: Finished issuegen.service - Generate /run/issue.
Sep 13 10:13:54.700241 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
Sep 13 10:13:54.736841 containerd[1560]: time="2025-09-13T10:13:54Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8
Sep 13 10:13:54.743557 containerd[1560]: time="2025-09-13T10:13:54.741660978Z" level=info msg="starting containerd" revision=fb4c30d4ede3531652d86197bf3fc9515e5276d9 version=v2.0.5
Sep 13 10:13:54.763200 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
Sep 13 10:13:54.766694 locksmithd[1612]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Sep 13 10:13:54.768917 systemd[1]: Started getty@tty1.service - Getty on tty1.
Sep 13 10:13:54.770396 containerd[1560]: time="2025-09-13T10:13:54.770343057Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="14.667µs"
Sep 13 10:13:54.770467 containerd[1560]: time="2025-09-13T10:13:54.770451110Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1
Sep 13 10:13:54.770560 containerd[1560]: time="2025-09-13T10:13:54.770544014Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1
Sep 13 10:13:54.770832 containerd[1560]: time="2025-09-13T10:13:54.770813500Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1
Sep 13 10:13:54.770901 containerd[1560]: time="2025-09-13T10:13:54.770888049Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1
Sep 13 10:13:54.771005 containerd[1560]: time="2025-09-13T10:13:54.770989600Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1
Sep 13 10:13:54.771184 containerd[1560]: time="2025-09-13T10:13:54.771164358Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1
Sep 13 10:13:54.771251 containerd[1560]: time="2025-09-13T10:13:54.771237775Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1
Sep 13 10:13:54.771663 containerd[1560]: time="2025-09-13T10:13:54.771641432Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1
Sep 13 10:13:54.771726 containerd[1560]: time="2025-09-13T10:13:54.771712506Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1
Sep 13 10:13:54.771780 containerd[1560]: time="2025-09-13T10:13:54.771766968Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1
Sep 13 10:13:54.771843 containerd[1560]: time="2025-09-13T10:13:54.771830056Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1
Sep 13 10:13:54.772124 containerd[1560]: time="2025-09-13T10:13:54.772106555Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1
Sep 13 10:13:54.772173 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0.
Sep 13 10:13:54.773858 containerd[1560]: time="2025-09-13T10:13:54.773837542Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1
Sep 13 10:13:54.773943 containerd[1560]: time="2025-09-13T10:13:54.773928041Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1
Sep 13 10:13:54.773994 containerd[1560]: time="2025-09-13T10:13:54.773982975Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1
Sep 13 10:13:54.774075 containerd[1560]: time="2025-09-13T10:13:54.774061402Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1
Sep 13 10:13:54.774652 containerd[1560]: time="2025-09-13T10:13:54.774529850Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1
Sep 13 10:13:54.774652 containerd[1560]: time="2025-09-13T10:13:54.774619478Z" level=info msg="metadata content store policy set" policy=shared
Sep 13 10:13:54.774782 systemd[1]: Reached target getty.target - Login Prompts.
Sep 13 10:13:54.787589 containerd[1560]: time="2025-09-13T10:13:54.787531438Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1
Sep 13 10:13:54.787784 containerd[1560]: time="2025-09-13T10:13:54.787721975Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1
Sep 13 10:13:54.787784 containerd[1560]: time="2025-09-13T10:13:54.787742524Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1
Sep 13 10:13:54.787896 containerd[1560]: time="2025-09-13T10:13:54.787881264Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1
Sep 13 10:13:54.788033 containerd[1560]: time="2025-09-13T10:13:54.787965341Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1
Sep 13 10:13:54.788033 containerd[1560]: time="2025-09-13T10:13:54.787981241Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1
Sep 13 10:13:54.788033 containerd[1560]: time="2025-09-13T10:13:54.787997051Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1
Sep 13 10:13:54.788188 containerd[1560]: time="2025-09-13T10:13:54.788010827Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1
Sep 13 10:13:54.788188 containerd[1560]: time="2025-09-13T10:13:54.788135300Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1
Sep 13 10:13:54.788188 containerd[1560]: time="2025-09-13T10:13:54.788148435Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1
Sep 13 10:13:54.788188 containerd[1560]: time="2025-09-13T10:13:54.788158614Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1
Sep 13 10:13:54.788375 containerd[1560]: time="2025-09-13T10:13:54.788173011Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2
Sep 13 10:13:54.788579 containerd[1560]: time="2025-09-13T10:13:54.788562181Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1
Sep 13 10:13:54.788670 containerd[1560]: time="2025-09-13T10:13:54.788656538Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1
Sep 13 10:13:54.788795 containerd[1560]: time="2025-09-13T10:13:54.788732981Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1
Sep 13 10:13:54.788795 containerd[1560]: time="2025-09-13T10:13:54.788753129Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1
Sep 13 10:13:54.788795 containerd[1560]: time="2025-09-13T10:13:54.788767296Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1
Sep 13 10:13:54.788906 containerd[1560]: time="2025-09-13T10:13:54.788890857Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1
Sep 13 10:13:54.789050 containerd[1560]: time="2025-09-13T10:13:54.788982640Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1
Sep 13 10:13:54.789050 containerd[1560]: time="2025-09-13T10:13:54.788998950Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1
Sep 13 10:13:54.789050 containerd[1560]: time="2025-09-13T10:13:54.789010973Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1
Sep 13 10:13:54.789050 containerd[1560]: time="2025-09-13T10:13:54.789023316Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1
Sep 13 10:13:54.789243 containerd[1560]: time="2025-09-13T10:13:54.789034928Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1
Sep 13 10:13:54.789380 containerd[1560]: time="2025-09-13T10:13:54.789364847Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\""
Sep 13 10:13:54.789488 containerd[1560]: time="2025-09-13T10:13:54.789474232Z" level=info msg="Start snapshots syncer"
Sep 13 10:13:54.789622 containerd[1560]: time="2025-09-13T10:13:54.789586693Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1
Sep 13 10:13:54.790131 containerd[1560]: time="2025-09-13T10:13:54.790080278Z" level=info msg="starting cri plugin" config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}"
Sep 13 10:13:54.790426 containerd[1560]: time="2025-09-13T10:13:54.790248123Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1
Sep 13 10:13:54.790637 containerd[1560]: time="2025-09-13T10:13:54.790590546Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1
Sep 13 10:13:54.790959 containerd[1560]: time="2025-09-13T10:13:54.790849431Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1
Sep 13 10:13:54.790959 containerd[1560]: time="2025-09-13T10:13:54.790875971Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1
Sep 13 10:13:54.790959 containerd[1560]: time="2025-09-13T10:13:54.790900727Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1
Sep 13 10:13:54.790959 containerd[1560]: time="2025-09-13T10:13:54.790913782Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1
Sep 13 10:13:54.790959 containerd[1560]: time="2025-09-13T10:13:54.790925865Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1
Sep 13 10:13:54.791116 containerd[1560]: time="2025-09-13T10:13:54.790941043Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1
Sep 13 10:13:54.791264 containerd[1560]: time="2025-09-13T10:13:54.791159803Z" level=info msg="loading
plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 Sep 13 10:13:54.791264 containerd[1560]: time="2025-09-13T10:13:54.791200209Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 Sep 13 10:13:54.791264 containerd[1560]: time="2025-09-13T10:13:54.791212592Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 Sep 13 10:13:54.791264 containerd[1560]: time="2025-09-13T10:13:54.791224354Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 Sep 13 10:13:54.791411 containerd[1560]: time="2025-09-13T10:13:54.791394333Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Sep 13 10:13:54.791572 containerd[1560]: time="2025-09-13T10:13:54.791554474Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Sep 13 10:13:54.792156 containerd[1560]: time="2025-09-13T10:13:54.791615037Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Sep 13 10:13:54.792156 containerd[1560]: time="2025-09-13T10:13:54.791630386Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Sep 13 10:13:54.792156 containerd[1560]: time="2025-09-13T10:13:54.791640765Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 Sep 13 10:13:54.792156 containerd[1560]: time="2025-09-13T10:13:54.791668117Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 Sep 13 10:13:54.792156 containerd[1560]: time="2025-09-13T10:13:54.791681873Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 Sep 13 
10:13:54.792156 containerd[1560]: time="2025-09-13T10:13:54.791703974Z" level=info msg="runtime interface created" Sep 13 10:13:54.792156 containerd[1560]: time="2025-09-13T10:13:54.791709985Z" level=info msg="created NRI interface" Sep 13 10:13:54.792156 containerd[1560]: time="2025-09-13T10:13:54.791719263Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 Sep 13 10:13:54.792156 containerd[1560]: time="2025-09-13T10:13:54.791732127Z" level=info msg="Connect containerd service" Sep 13 10:13:54.792156 containerd[1560]: time="2025-09-13T10:13:54.791759558Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Sep 13 10:13:54.793280 containerd[1560]: time="2025-09-13T10:13:54.793255594Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Sep 13 10:13:54.959229 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Sep 13 10:13:54.962712 systemd[1]: Started sshd@0-10.0.0.19:22-10.0.0.1:38578.service - OpenSSH per-connection server daemon (10.0.0.1:38578). Sep 13 10:13:54.973667 tar[1558]: linux-amd64/README.md Sep 13 10:13:54.978070 containerd[1560]: time="2025-09-13T10:13:54.978012335Z" level=info msg="Start subscribing containerd event" Sep 13 10:13:54.978268 containerd[1560]: time="2025-09-13T10:13:54.978222870Z" level=info msg="Start recovering state" Sep 13 10:13:54.978489 containerd[1560]: time="2025-09-13T10:13:54.978292942Z" level=info msg=serving... 
address=/run/containerd/containerd.sock.ttrpc Sep 13 10:13:54.978554 containerd[1560]: time="2025-09-13T10:13:54.978459955Z" level=info msg="Start event monitor" Sep 13 10:13:54.978577 containerd[1560]: time="2025-09-13T10:13:54.978560924Z" level=info msg="Start cni network conf syncer for default" Sep 13 10:13:54.978577 containerd[1560]: time="2025-09-13T10:13:54.978571354Z" level=info msg="Start streaming server" Sep 13 10:13:54.978625 containerd[1560]: time="2025-09-13T10:13:54.978594207Z" level=info msg=serving... address=/run/containerd/containerd.sock Sep 13 10:13:54.978625 containerd[1560]: time="2025-09-13T10:13:54.978611028Z" level=info msg="Registered namespace \"k8s.io\" with NRI" Sep 13 10:13:54.978672 containerd[1560]: time="2025-09-13T10:13:54.978627840Z" level=info msg="runtime interface starting up..." Sep 13 10:13:54.978672 containerd[1560]: time="2025-09-13T10:13:54.978636847Z" level=info msg="starting plugins..." Sep 13 10:13:54.978672 containerd[1560]: time="2025-09-13T10:13:54.978658207Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Sep 13 10:13:54.978973 containerd[1560]: time="2025-09-13T10:13:54.978831171Z" level=info msg="containerd successfully booted in 0.261972s" Sep 13 10:13:54.979578 systemd[1]: Started containerd.service - containerd container runtime. Sep 13 10:13:55.000910 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Sep 13 10:13:55.062951 sshd[1642]: Accepted publickey for core from 10.0.0.1 port 38578 ssh2: RSA SHA256:zcsqT46NGGfuXQOUKdVqBiqQMVWjN6YtLkqFhpEQQJ4 Sep 13 10:13:55.065264 sshd-session[1642]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 10:13:55.072824 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Sep 13 10:13:55.075321 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Sep 13 10:13:55.084576 systemd-logind[1542]: New session 1 of user core. 
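The "failed to load cni during init" error earlier in this boot is expected on a fresh node: containerd's cni conf syncer found nothing in /etc/cni/net.d, the confDir named in the cri plugin config. A minimal sketch of the conflist shape that syncer watches for — the network name, bridge name, and 10.244.0.0/16 CIDR below are illustrative assumptions, not values from this host:

```python
import json

# Illustrative CNI network config of the shape containerd loads from
# /etc/cni/net.d; plugin names and the subnet are assumptions.
conflist = {
    "cniVersion": "1.0.0",
    "name": "example-pod-network",
    "plugins": [
        {
            "type": "bridge",
            "bridge": "cni0",
            "isGateway": True,
            "ipMasq": True,
            "ipam": {
                "type": "host-local",
                "ranges": [[{"subnet": "10.244.0.0/16"}]],
                "routes": [{"dst": "0.0.0.0/0"}],
            },
        },
        {"type": "portmap", "capabilities": {"portMappings": True}},
    ],
}

# Serialize as it would be written to e.g. /etc/cni/net.d/10-example.conflist.
text = json.dumps(conflist, indent=2)
```

Once a CNI plugin (typically installed by the cluster's network add-on) drops such a file into the watched directory, the "Start cni network conf syncer for default" loop logged above picks it up without a containerd restart.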
Sep 13 10:13:55.100325 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Sep 13 10:13:55.104975 systemd[1]: Starting user@500.service - User Manager for UID 500... Sep 13 10:13:55.124800 (systemd)[1651]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Sep 13 10:13:55.127701 systemd-logind[1542]: New session c1 of user core. Sep 13 10:13:55.292476 systemd[1651]: Queued start job for default target default.target. Sep 13 10:13:55.304744 systemd[1651]: Created slice app.slice - User Application Slice. Sep 13 10:13:55.304773 systemd[1651]: Reached target paths.target - Paths. Sep 13 10:13:55.304825 systemd[1651]: Reached target timers.target - Timers. Sep 13 10:13:55.306421 systemd[1651]: Starting dbus.socket - D-Bus User Message Bus Socket... Sep 13 10:13:55.317758 systemd[1651]: Listening on dbus.socket - D-Bus User Message Bus Socket. Sep 13 10:13:55.317931 systemd[1651]: Reached target sockets.target - Sockets. Sep 13 10:13:55.317991 systemd[1651]: Reached target basic.target - Basic System. Sep 13 10:13:55.318050 systemd[1651]: Reached target default.target - Main User Target. Sep 13 10:13:55.318098 systemd[1651]: Startup finished in 181ms. Sep 13 10:13:55.318200 systemd[1]: Started user@500.service - User Manager for UID 500. Sep 13 10:13:55.329644 systemd[1]: Started session-1.scope - Session 1 of User core. Sep 13 10:13:55.414630 systemd[1]: Started sshd@1-10.0.0.19:22-10.0.0.1:38592.service - OpenSSH per-connection server daemon (10.0.0.1:38592). Sep 13 10:13:55.474687 sshd[1662]: Accepted publickey for core from 10.0.0.1 port 38592 ssh2: RSA SHA256:zcsqT46NGGfuXQOUKdVqBiqQMVWjN6YtLkqFhpEQQJ4 Sep 13 10:13:55.476535 sshd-session[1662]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 10:13:55.482103 systemd-logind[1542]: New session 2 of user core. Sep 13 10:13:55.498809 systemd[1]: Started session-2.scope - Session 2 of User core. 
Sep 13 10:13:55.555431 sshd[1665]: Connection closed by 10.0.0.1 port 38592 Sep 13 10:13:55.555976 sshd-session[1662]: pam_unix(sshd:session): session closed for user core Sep 13 10:13:55.565324 systemd[1]: sshd@1-10.0.0.19:22-10.0.0.1:38592.service: Deactivated successfully. Sep 13 10:13:55.567056 systemd[1]: session-2.scope: Deactivated successfully. Sep 13 10:13:55.567934 systemd-logind[1542]: Session 2 logged out. Waiting for processes to exit. Sep 13 10:13:55.570680 systemd[1]: Started sshd@2-10.0.0.19:22-10.0.0.1:38604.service - OpenSSH per-connection server daemon (10.0.0.1:38604). Sep 13 10:13:55.572926 systemd-logind[1542]: Removed session 2. Sep 13 10:13:55.626031 sshd[1671]: Accepted publickey for core from 10.0.0.1 port 38604 ssh2: RSA SHA256:zcsqT46NGGfuXQOUKdVqBiqQMVWjN6YtLkqFhpEQQJ4 Sep 13 10:13:55.627444 sshd-session[1671]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 10:13:55.632793 systemd-logind[1542]: New session 3 of user core. Sep 13 10:13:55.642862 systemd[1]: Started session-3.scope - Session 3 of User core. Sep 13 10:13:55.697818 sshd[1674]: Connection closed by 10.0.0.1 port 38604 Sep 13 10:13:55.698174 sshd-session[1671]: pam_unix(sshd:session): session closed for user core Sep 13 10:13:55.702874 systemd[1]: sshd@2-10.0.0.19:22-10.0.0.1:38604.service: Deactivated successfully. Sep 13 10:13:55.704892 systemd[1]: session-3.scope: Deactivated successfully. Sep 13 10:13:55.705641 systemd-logind[1542]: Session 3 logged out. Waiting for processes to exit. Sep 13 10:13:55.706918 systemd-logind[1542]: Removed session 3. Sep 13 10:13:56.050000 systemd-networkd[1476]: eth0: Gained IPv6LL Sep 13 10:13:56.053915 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Sep 13 10:13:56.055889 systemd[1]: Reached target network-online.target - Network is Online. Sep 13 10:13:56.058760 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... 
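The "eth0: Gained IPv6LL" event above marks the interface acquiring its IPv6 link-local address. The classic derivation is modified EUI-64 from the MAC (systemd-networkd may instead use RFC 7217 stable-privacy addresses, so this is a sketch of the traditional scheme only; the MAC below is an illustrative assumption):

```python
import ipaddress

def eui64_link_local(mac: str) -> ipaddress.IPv6Address:
    """Derive a fe80::/64 link-local address via modified EUI-64."""
    b = bytearray(int(x, 16) for x in mac.split(":"))
    b[0] ^= 0x02                       # flip the universal/local bit
    eui = b[:3] + b"\xff\xfe" + b[3:]  # insert ff:fe between the OUI and NIC halves
    return ipaddress.IPv6Address(b"\xfe\x80" + b"\x00" * 6 + bytes(eui))

addr = eui64_link_local("52:54:00:12:34:56")  # QEMU-style MAC, assumed
```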
Sep 13 10:13:56.061281 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 13 10:13:56.063783 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Sep 13 10:13:56.102764 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Sep 13 10:13:56.105336 systemd[1]: coreos-metadata.service: Deactivated successfully. Sep 13 10:13:56.105658 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Sep 13 10:13:56.108127 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Sep 13 10:13:57.133651 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 13 10:13:57.135546 systemd[1]: Reached target multi-user.target - Multi-User System. Sep 13 10:13:57.136983 systemd[1]: Startup finished in 3.450s (kernel) + 10.606s (initrd) + 5.904s (userspace) = 19.961s. Sep 13 10:13:57.176184 (kubelet)[1702]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 13 10:13:57.898194 kubelet[1702]: E0913 10:13:57.898105 1702 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 13 10:13:57.902688 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 13 10:13:57.902883 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 13 10:13:57.903261 systemd[1]: kubelet.service: Consumed 1.624s CPU time, 267.9M memory peak. Sep 13 10:14:05.710796 systemd[1]: Started sshd@3-10.0.0.19:22-10.0.0.1:51760.service - OpenSSH per-connection server daemon (10.0.0.1:51760). 
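systemd's boot-time summary above is additive: kernel, initrd, and userspace phases sum to the reported total (to within rounding of the printed values). A quick sketch that parses that line and checks the arithmetic, using the exact text from this log:

```python
import re

# Copied from the systemd "Startup finished" entry above.
line = ("Startup finished in 3.450s (kernel) + 10.606s (initrd) "
        "+ 5.904s (userspace) = 19.961s.")

# Pull out each phase duration and the reported total.
phases = {name: float(sec)
          for sec, name in re.findall(r"([\d.]+)s \((\w+)\)", line)}
total = float(re.search(r"= ([\d.]+)s", line).group(1))

# The printed phases should sum to the printed total, modulo rounding.
assert abs(sum(phases.values()) - total) < 0.005
```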
Sep 13 10:14:05.775457 sshd[1715]: Accepted publickey for core from 10.0.0.1 port 51760 ssh2: RSA SHA256:zcsqT46NGGfuXQOUKdVqBiqQMVWjN6YtLkqFhpEQQJ4 Sep 13 10:14:05.776926 sshd-session[1715]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 10:14:05.781628 systemd-logind[1542]: New session 4 of user core. Sep 13 10:14:05.792671 systemd[1]: Started session-4.scope - Session 4 of User core. Sep 13 10:14:05.846703 sshd[1718]: Connection closed by 10.0.0.1 port 51760 Sep 13 10:14:05.847198 sshd-session[1715]: pam_unix(sshd:session): session closed for user core Sep 13 10:14:05.861842 systemd[1]: sshd@3-10.0.0.19:22-10.0.0.1:51760.service: Deactivated successfully. Sep 13 10:14:05.864614 systemd[1]: session-4.scope: Deactivated successfully. Sep 13 10:14:05.865673 systemd-logind[1542]: Session 4 logged out. Waiting for processes to exit. Sep 13 10:14:05.870052 systemd[1]: Started sshd@4-10.0.0.19:22-10.0.0.1:51772.service - OpenSSH per-connection server daemon (10.0.0.1:51772). Sep 13 10:14:05.870837 systemd-logind[1542]: Removed session 4. Sep 13 10:14:05.928921 sshd[1724]: Accepted publickey for core from 10.0.0.1 port 51772 ssh2: RSA SHA256:zcsqT46NGGfuXQOUKdVqBiqQMVWjN6YtLkqFhpEQQJ4 Sep 13 10:14:05.931064 sshd-session[1724]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 10:14:05.936561 systemd-logind[1542]: New session 5 of user core. Sep 13 10:14:05.943652 systemd[1]: Started session-5.scope - Session 5 of User core. Sep 13 10:14:05.992309 sshd[1727]: Connection closed by 10.0.0.1 port 51772 Sep 13 10:14:05.992546 sshd-session[1724]: pam_unix(sshd:session): session closed for user core Sep 13 10:14:06.003880 systemd[1]: sshd@4-10.0.0.19:22-10.0.0.1:51772.service: Deactivated successfully. Sep 13 10:14:06.005594 systemd[1]: session-5.scope: Deactivated successfully. Sep 13 10:14:06.006300 systemd-logind[1542]: Session 5 logged out. Waiting for processes to exit. 
Sep 13 10:14:06.008934 systemd[1]: Started sshd@5-10.0.0.19:22-10.0.0.1:51774.service - OpenSSH per-connection server daemon (10.0.0.1:51774). Sep 13 10:14:06.009709 systemd-logind[1542]: Removed session 5. Sep 13 10:14:06.068077 sshd[1733]: Accepted publickey for core from 10.0.0.1 port 51774 ssh2: RSA SHA256:zcsqT46NGGfuXQOUKdVqBiqQMVWjN6YtLkqFhpEQQJ4 Sep 13 10:14:06.069378 sshd-session[1733]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 10:14:06.073658 systemd-logind[1542]: New session 6 of user core. Sep 13 10:14:06.081645 systemd[1]: Started session-6.scope - Session 6 of User core. Sep 13 10:14:06.133867 sshd[1736]: Connection closed by 10.0.0.1 port 51774 Sep 13 10:14:06.134203 sshd-session[1733]: pam_unix(sshd:session): session closed for user core Sep 13 10:14:06.146957 systemd[1]: sshd@5-10.0.0.19:22-10.0.0.1:51774.service: Deactivated successfully. Sep 13 10:14:06.148661 systemd[1]: session-6.scope: Deactivated successfully. Sep 13 10:14:06.149353 systemd-logind[1542]: Session 6 logged out. Waiting for processes to exit. Sep 13 10:14:06.151772 systemd[1]: Started sshd@6-10.0.0.19:22-10.0.0.1:51788.service - OpenSSH per-connection server daemon (10.0.0.1:51788). Sep 13 10:14:06.152386 systemd-logind[1542]: Removed session 6. Sep 13 10:14:06.202595 sshd[1742]: Accepted publickey for core from 10.0.0.1 port 51788 ssh2: RSA SHA256:zcsqT46NGGfuXQOUKdVqBiqQMVWjN6YtLkqFhpEQQJ4 Sep 13 10:14:06.203925 sshd-session[1742]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 10:14:06.207970 systemd-logind[1542]: New session 7 of user core. Sep 13 10:14:06.224623 systemd[1]: Started session-7.scope - Session 7 of User core. 
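The rapid session churn above (sessions 2 through 7 opening and closing within seconds) is a connect/authenticate/disconnect pattern in which each accepted connection can be paired with its close by client port. A sketch of that pairing over abridged event lines from this log:

```python
import re

# Abridged sshd entries from the log above (timestamps and PIDs dropped).
events = [
    "sshd[1662]: Accepted publickey for core from 10.0.0.1 port 38592 ssh2",
    "sshd[1665]: Connection closed by 10.0.0.1 port 38592",
    "sshd[1671]: Accepted publickey for core from 10.0.0.1 port 38604 ssh2",
    "sshd[1674]: Connection closed by 10.0.0.1 port 38604",
]

open_ports, closed = set(), []
for e in events:
    if m := re.search(r"Accepted publickey .* port (\d+)", e):
        open_ports.add(m.group(1))
    elif m := re.search(r"Connection closed by .* port (\d+)", e):
        if m.group(1) in open_ports:
            open_ports.remove(m.group(1))
            closed.append(m.group(1))

# In this window every accepted connection was later closed.
assert not open_ports
```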
Sep 13 10:14:06.281568 sudo[1746]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Sep 13 10:14:06.281952 sudo[1746]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 13 10:14:06.304978 sudo[1746]: pam_unix(sudo:session): session closed for user root Sep 13 10:14:06.306517 sshd[1745]: Connection closed by 10.0.0.1 port 51788 Sep 13 10:14:06.306938 sshd-session[1742]: pam_unix(sshd:session): session closed for user core Sep 13 10:14:06.318048 systemd[1]: sshd@6-10.0.0.19:22-10.0.0.1:51788.service: Deactivated successfully. Sep 13 10:14:06.319734 systemd[1]: session-7.scope: Deactivated successfully. Sep 13 10:14:06.320578 systemd-logind[1542]: Session 7 logged out. Waiting for processes to exit. Sep 13 10:14:06.323459 systemd[1]: Started sshd@7-10.0.0.19:22-10.0.0.1:51804.service - OpenSSH per-connection server daemon (10.0.0.1:51804). Sep 13 10:14:06.323984 systemd-logind[1542]: Removed session 7. Sep 13 10:14:06.383155 sshd[1752]: Accepted publickey for core from 10.0.0.1 port 51804 ssh2: RSA SHA256:zcsqT46NGGfuXQOUKdVqBiqQMVWjN6YtLkqFhpEQQJ4 Sep 13 10:14:06.384487 sshd-session[1752]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 10:14:06.388802 systemd-logind[1542]: New session 8 of user core. Sep 13 10:14:06.403707 systemd[1]: Started session-8.scope - Session 8 of User core. 
Sep 13 10:14:06.458217 sudo[1758]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Sep 13 10:14:06.458542 sudo[1758]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 13 10:14:06.465165 sudo[1758]: pam_unix(sudo:session): session closed for user root Sep 13 10:14:06.471424 sudo[1757]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Sep 13 10:14:06.471747 sudo[1757]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 13 10:14:06.482400 systemd[1]: Starting audit-rules.service - Load Audit Rules... Sep 13 10:14:06.537415 augenrules[1780]: No rules Sep 13 10:14:06.538455 systemd[1]: audit-rules.service: Deactivated successfully. Sep 13 10:14:06.538794 systemd[1]: Finished audit-rules.service - Load Audit Rules. Sep 13 10:14:06.540201 sudo[1757]: pam_unix(sudo:session): session closed for user root Sep 13 10:14:06.541893 sshd[1756]: Connection closed by 10.0.0.1 port 51804 Sep 13 10:14:06.542328 sshd-session[1752]: pam_unix(sshd:session): session closed for user core Sep 13 10:14:06.551194 systemd[1]: sshd@7-10.0.0.19:22-10.0.0.1:51804.service: Deactivated successfully. Sep 13 10:14:06.553102 systemd[1]: session-8.scope: Deactivated successfully. Sep 13 10:14:06.553843 systemd-logind[1542]: Session 8 logged out. Waiting for processes to exit. Sep 13 10:14:06.556541 systemd[1]: Started sshd@8-10.0.0.19:22-10.0.0.1:51806.service - OpenSSH per-connection server daemon (10.0.0.1:51806). Sep 13 10:14:06.557256 systemd-logind[1542]: Removed session 8. Sep 13 10:14:06.607754 sshd[1789]: Accepted publickey for core from 10.0.0.1 port 51806 ssh2: RSA SHA256:zcsqT46NGGfuXQOUKdVqBiqQMVWjN6YtLkqFhpEQQJ4 Sep 13 10:14:06.609204 sshd-session[1789]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 10:14:06.614153 systemd-logind[1542]: New session 9 of user core. 
Sep 13 10:14:06.623651 systemd[1]: Started session-9.scope - Session 9 of User core. Sep 13 10:14:06.678248 sudo[1793]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Sep 13 10:14:06.678683 sudo[1793]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 13 10:14:07.271376 systemd[1]: Starting docker.service - Docker Application Container Engine... Sep 13 10:14:07.300890 (dockerd)[1814]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Sep 13 10:14:07.983272 dockerd[1814]: time="2025-09-13T10:14:07.983192406Z" level=info msg="Starting up" Sep 13 10:14:07.984535 dockerd[1814]: time="2025-09-13T10:14:07.984445747Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" Sep 13 10:14:07.988700 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Sep 13 10:14:07.990788 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 13 10:14:08.018160 dockerd[1814]: time="2025-09-13T10:14:08.018111494Z" level=info msg="Creating a containerd client" address=/var/run/docker/libcontainerd/docker-containerd.sock timeout=1m0s Sep 13 10:14:08.359914 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 13 10:14:08.364840 (kubelet)[1846]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 13 10:14:08.485958 dockerd[1814]: time="2025-09-13T10:14:08.485900132Z" level=info msg="Loading containers: start." 
Sep 13 10:14:08.606090 kubelet[1846]: E0913 10:14:08.606002 1846 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 13 10:14:08.613011 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 13 10:14:08.613228 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 13 10:14:08.613624 systemd[1]: kubelet.service: Consumed 313ms CPU time, 110.4M memory peak. Sep 13 10:14:08.660564 kernel: Initializing XFRM netlink socket Sep 13 10:14:08.922849 systemd-networkd[1476]: docker0: Link UP Sep 13 10:14:08.928627 dockerd[1814]: time="2025-09-13T10:14:08.928589701Z" level=info msg="Loading containers: done." Sep 13 10:14:08.946104 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck1161026148-merged.mount: Deactivated successfully. 
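The recurring kubelet failure above is the expected pre-bootstrap state: /var/lib/kubelet/config.yaml does not exist until cluster bootstrap (typically `kubeadm init` or `kubeadm join`) writes it, so the unit crash-loops under its restart policy until then. A minimal sketch of the kind of KubeletConfiguration that file carries — the field values here are assumptions for illustration, not taken from this host:

```python
import json

# Sketch of a minimal KubeletConfiguration body; JSON is a subset of YAML,
# so this serialization is a valid config.yaml payload.
config = {
    "apiVersion": "kubelet.config.k8s.io/v1beta1",
    "kind": "KubeletConfiguration",
    # Matches the SystemdCgroup=true runc option in the containerd
    # cri config logged earlier in this boot.
    "cgroupDriver": "systemd",
    "staticPodPath": "/etc/kubernetes/manifests",  # assumed path
}

text = json.dumps(config, indent=2)
```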
Sep 13 10:14:08.948743 dockerd[1814]: time="2025-09-13T10:14:08.948690905Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Sep 13 10:14:08.948820 dockerd[1814]: time="2025-09-13T10:14:08.948798256Z" level=info msg="Docker daemon" commit=6430e49a55babd9b8f4d08e70ecb2b68900770fe containerd-snapshotter=false storage-driver=overlay2 version=28.0.4 Sep 13 10:14:08.948940 dockerd[1814]: time="2025-09-13T10:14:08.948916398Z" level=info msg="Initializing buildkit" Sep 13 10:14:08.980750 dockerd[1814]: time="2025-09-13T10:14:08.980693052Z" level=info msg="Completed buildkit initialization" Sep 13 10:14:08.986082 dockerd[1814]: time="2025-09-13T10:14:08.986027608Z" level=info msg="Daemon has completed initialization" Sep 13 10:14:08.986557 dockerd[1814]: time="2025-09-13T10:14:08.986130982Z" level=info msg="API listen on /run/docker.sock" Sep 13 10:14:08.986307 systemd[1]: Started docker.service - Docker Application Container Engine. Sep 13 10:14:10.250056 containerd[1560]: time="2025-09-13T10:14:10.250011060Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.5\"" Sep 13 10:14:10.909895 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3280339307.mount: Deactivated successfully. 
Sep 13 10:14:12.880834 containerd[1560]: time="2025-09-13T10:14:12.880743943Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.33.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 10:14:12.881313 containerd[1560]: time="2025-09-13T10:14:12.881266132Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.33.5: active requests=0, bytes read=30114893" Sep 13 10:14:12.882335 containerd[1560]: time="2025-09-13T10:14:12.882292317Z" level=info msg="ImageCreate event name:\"sha256:b7335a56022aba291f5df653c01b7ab98d64fb5cab221378617f4a1236e06a62\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 10:14:12.885114 containerd[1560]: time="2025-09-13T10:14:12.885080457Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:1b9c6c00bc1fe86860e72efb8e4148f9e436a132eba4ca636ca4f48d61d6dfb4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 10:14:12.886007 containerd[1560]: time="2025-09-13T10:14:12.885976227Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.33.5\" with image id \"sha256:b7335a56022aba291f5df653c01b7ab98d64fb5cab221378617f4a1236e06a62\", repo tag \"registry.k8s.io/kube-apiserver:v1.33.5\", repo digest \"registry.k8s.io/kube-apiserver@sha256:1b9c6c00bc1fe86860e72efb8e4148f9e436a132eba4ca636ca4f48d61d6dfb4\", size \"30111492\" in 2.635922498s" Sep 13 10:14:12.886068 containerd[1560]: time="2025-09-13T10:14:12.886019358Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.5\" returns image reference \"sha256:b7335a56022aba291f5df653c01b7ab98d64fb5cab221378617f4a1236e06a62\"" Sep 13 10:14:12.886748 containerd[1560]: time="2025-09-13T10:14:12.886722356Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.5\"" Sep 13 10:14:14.133037 containerd[1560]: time="2025-09-13T10:14:14.132975351Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.33.5\" 
labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 10:14:14.133888 containerd[1560]: time="2025-09-13T10:14:14.133840774Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.33.5: active requests=0, bytes read=26020844" Sep 13 10:14:14.135192 containerd[1560]: time="2025-09-13T10:14:14.135142816Z" level=info msg="ImageCreate event name:\"sha256:8bb43160a0df4d7d34c89d9edbc48735bc2f830771e4b501937338221be0f668\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 10:14:14.137496 containerd[1560]: time="2025-09-13T10:14:14.137448651Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:1082a6ab67fb46397314dd36b36cb197ba4a4c5365033e9ad22bc7edaaaabd5c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 10:14:14.138569 containerd[1560]: time="2025-09-13T10:14:14.138534358Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.33.5\" with image id \"sha256:8bb43160a0df4d7d34c89d9edbc48735bc2f830771e4b501937338221be0f668\", repo tag \"registry.k8s.io/kube-controller-manager:v1.33.5\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:1082a6ab67fb46397314dd36b36cb197ba4a4c5365033e9ad22bc7edaaaabd5c\", size \"27681301\" in 1.251779731s" Sep 13 10:14:14.138609 containerd[1560]: time="2025-09-13T10:14:14.138570215Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.5\" returns image reference \"sha256:8bb43160a0df4d7d34c89d9edbc48735bc2f830771e4b501937338221be0f668\"" Sep 13 10:14:14.139141 containerd[1560]: time="2025-09-13T10:14:14.139113184Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.5\"" Sep 13 10:14:15.919015 containerd[1560]: time="2025-09-13T10:14:15.918933618Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.33.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 10:14:15.919808 containerd[1560]: time="2025-09-13T10:14:15.919762533Z" 
level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.33.5: active requests=0, bytes read=20155568" Sep 13 10:14:15.920966 containerd[1560]: time="2025-09-13T10:14:15.920926286Z" level=info msg="ImageCreate event name:\"sha256:33b680aadf474b7e5e73957fc00c6af86dd0484c699c8461ba33ee656d1823bf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 10:14:15.923734 containerd[1560]: time="2025-09-13T10:14:15.923678328Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:3e7b57c9d9f06b77f0064e5be7f3df61e0151101160acd5fdecce911df28a189\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 10:14:15.926517 containerd[1560]: time="2025-09-13T10:14:15.924876375Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.33.5\" with image id \"sha256:33b680aadf474b7e5e73957fc00c6af86dd0484c699c8461ba33ee656d1823bf\", repo tag \"registry.k8s.io/kube-scheduler:v1.33.5\", repo digest \"registry.k8s.io/kube-scheduler@sha256:3e7b57c9d9f06b77f0064e5be7f3df61e0151101160acd5fdecce911df28a189\", size \"21816043\" in 1.785483327s" Sep 13 10:14:15.926517 containerd[1560]: time="2025-09-13T10:14:15.924946066Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.5\" returns image reference \"sha256:33b680aadf474b7e5e73957fc00c6af86dd0484c699c8461ba33ee656d1823bf\"" Sep 13 10:14:15.927136 containerd[1560]: time="2025-09-13T10:14:15.927084607Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.5\"" Sep 13 10:14:17.506693 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount536331955.mount: Deactivated successfully. 
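Each "Pulled image" entry above reports both a size in bytes and a wall-clock duration, which yields an effective pull throughput. A sketch that extracts both from the kube-scheduler entry (the message is abridged to the fields parsed; size and duration are copied from this log):

```python
import re

# Abridged from the kube-scheduler pull-completion entry above.
msg = ('Pulled image "registry.k8s.io/kube-scheduler:v1.33.5" ... '
       'size "21816043" in 1.785483327s')

size = int(re.search(r'size "(\d+)"', msg).group(1))
secs = float(re.search(r"in ([\d.]+)s", msg).group(1))
rate = size / secs  # effective throughput in bytes per second
```

For this pull that works out to roughly 12 MB/s, a rough lower bound on the node's registry bandwidth since the duration also includes unpacking.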
Sep 13 10:14:18.362845 containerd[1560]: time="2025-09-13T10:14:18.362777821Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.33.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 10:14:18.363603 containerd[1560]: time="2025-09-13T10:14:18.363577531Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.33.5: active requests=0, bytes read=31929469" Sep 13 10:14:18.364992 containerd[1560]: time="2025-09-13T10:14:18.364956267Z" level=info msg="ImageCreate event name:\"sha256:2844ee7bb56c2c194e1f4adafb9e7b60b9ed16aa4d07ab8ad1f019362e2efab3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 10:14:18.367257 containerd[1560]: time="2025-09-13T10:14:18.367216457Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:71445ec84ad98bd52a7784865a9d31b1b50b56092d3f7699edc39eefd71befe1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 10:14:18.367721 containerd[1560]: time="2025-09-13T10:14:18.367666982Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.33.5\" with image id \"sha256:2844ee7bb56c2c194e1f4adafb9e7b60b9ed16aa4d07ab8ad1f019362e2efab3\", repo tag \"registry.k8s.io/kube-proxy:v1.33.5\", repo digest \"registry.k8s.io/kube-proxy@sha256:71445ec84ad98bd52a7784865a9d31b1b50b56092d3f7699edc39eefd71befe1\", size \"31928488\" in 2.440542279s" Sep 13 10:14:18.367721 containerd[1560]: time="2025-09-13T10:14:18.367698691Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.5\" returns image reference \"sha256:2844ee7bb56c2c194e1f4adafb9e7b60b9ed16aa4d07ab8ad1f019362e2efab3\"" Sep 13 10:14:18.368270 containerd[1560]: time="2025-09-13T10:14:18.368234156Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\"" Sep 13 10:14:18.769895 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Sep 13 10:14:18.771596 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
Sep 13 10:14:19.004530 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 13 10:14:19.009530 (kubelet)[2131]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 13 10:14:19.111218 kubelet[2131]: E0913 10:14:19.111040 2131 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 13 10:14:19.115477 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 13 10:14:19.115702 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 13 10:14:19.116118 systemd[1]: kubelet.service: Consumed 298ms CPU time, 110.3M memory peak. Sep 13 10:14:19.367967 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1248201177.mount: Deactivated successfully. 
Sep 13 10:14:22.255843 containerd[1560]: time="2025-09-13T10:14:22.255751537Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 10:14:22.257445 containerd[1560]: time="2025-09-13T10:14:22.257401131Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.0: active requests=0, bytes read=20942238" Sep 13 10:14:22.259326 containerd[1560]: time="2025-09-13T10:14:22.259301356Z" level=info msg="ImageCreate event name:\"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 10:14:22.262172 containerd[1560]: time="2025-09-13T10:14:22.262104323Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 10:14:22.263035 containerd[1560]: time="2025-09-13T10:14:22.262991367Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.0\" with image id \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.0\", repo digest \"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\", size \"20939036\" in 3.894728918s" Sep 13 10:14:22.263035 containerd[1560]: time="2025-09-13T10:14:22.263019390Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\" returns image reference \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\"" Sep 13 10:14:22.263648 containerd[1560]: time="2025-09-13T10:14:22.263574270Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Sep 13 10:14:22.781222 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1749841506.mount: Deactivated successfully. 
Sep 13 10:14:22.788785 containerd[1560]: time="2025-09-13T10:14:22.788733870Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 13 10:14:22.789891 containerd[1560]: time="2025-09-13T10:14:22.789865552Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138" Sep 13 10:14:22.791368 containerd[1560]: time="2025-09-13T10:14:22.791209022Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 13 10:14:23.066178 containerd[1560]: time="2025-09-13T10:14:23.066013667Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 13 10:14:23.067133 containerd[1560]: time="2025-09-13T10:14:23.067078815Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 803.467255ms" Sep 13 10:14:23.067184 containerd[1560]: time="2025-09-13T10:14:23.067137054Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Sep 13 10:14:23.067821 containerd[1560]: time="2025-09-13T10:14:23.067782585Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\"" Sep 13 10:14:24.239617 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2109325217.mount: 
Deactivated successfully. Sep 13 10:14:29.270379 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Sep 13 10:14:29.273666 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 13 10:14:29.504742 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 13 10:14:29.518902 (kubelet)[2255]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 13 10:14:29.579415 kubelet[2255]: E0913 10:14:29.579209 2255 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 13 10:14:29.584299 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 13 10:14:29.584544 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 13 10:14:29.585106 systemd[1]: kubelet.service: Consumed 252ms CPU time, 110.7M memory peak. 
Sep 13 10:14:31.294816 containerd[1560]: time="2025-09-13T10:14:31.294747171Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.21-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 10:14:31.295565 containerd[1560]: time="2025-09-13T10:14:31.295486124Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.21-0: active requests=0, bytes read=58378433" Sep 13 10:14:31.296613 containerd[1560]: time="2025-09-13T10:14:31.296555841Z" level=info msg="ImageCreate event name:\"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 10:14:31.299301 containerd[1560]: time="2025-09-13T10:14:31.299260524Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 10:14:31.300257 containerd[1560]: time="2025-09-13T10:14:31.300220930Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.21-0\" with image id \"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\", repo tag \"registry.k8s.io/etcd:3.5.21-0\", repo digest \"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\", size \"58938593\" in 8.232394323s" Sep 13 10:14:31.300257 containerd[1560]: time="2025-09-13T10:14:31.300253322Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\" returns image reference \"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\"" Sep 13 10:14:34.026520 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Sep 13 10:14:34.026731 systemd[1]: kubelet.service: Consumed 252ms CPU time, 110.7M memory peak. Sep 13 10:14:34.029927 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 13 10:14:34.059553 systemd[1]: Reload requested from client PID 2300 ('systemctl') (unit session-9.scope)... 
Sep 13 10:14:34.059564 systemd[1]: Reloading... Sep 13 10:14:34.160571 zram_generator::config[2343]: No configuration found. Sep 13 10:14:34.578365 systemd[1]: Reloading finished in 518 ms. Sep 13 10:14:34.644212 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Sep 13 10:14:34.644333 systemd[1]: kubelet.service: Failed with result 'signal'. Sep 13 10:14:34.644733 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Sep 13 10:14:34.644795 systemd[1]: kubelet.service: Consumed 157ms CPU time, 98.3M memory peak. Sep 13 10:14:34.646710 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 13 10:14:35.052273 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 13 10:14:35.064862 (kubelet)[2391]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Sep 13 10:14:35.116109 kubelet[2391]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 13 10:14:35.116109 kubelet[2391]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Sep 13 10:14:35.116109 kubelet[2391]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Sep 13 10:14:35.116656 kubelet[2391]: I0913 10:14:35.116165 2391 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Sep 13 10:14:35.505327 kubelet[2391]: I0913 10:14:35.505175 2391 server.go:530] "Kubelet version" kubeletVersion="v1.33.0" Sep 13 10:14:35.505327 kubelet[2391]: I0913 10:14:35.505213 2391 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Sep 13 10:14:35.505528 kubelet[2391]: I0913 10:14:35.505484 2391 server.go:956] "Client rotation is on, will bootstrap in background" Sep 13 10:14:35.593679 kubelet[2391]: E0913 10:14:35.593592 2391 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.19:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.19:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Sep 13 10:14:35.594872 kubelet[2391]: I0913 10:14:35.594828 2391 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Sep 13 10:14:35.603839 kubelet[2391]: I0913 10:14:35.603797 2391 server.go:1446] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Sep 13 10:14:35.610325 kubelet[2391]: I0913 10:14:35.610277 2391 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Sep 13 10:14:35.610801 kubelet[2391]: I0913 10:14:35.610750 2391 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Sep 13 10:14:35.611085 kubelet[2391]: I0913 10:14:35.610789 2391 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Sep 13 10:14:35.611293 kubelet[2391]: I0913 10:14:35.611100 2391 topology_manager.go:138] "Creating topology manager with none policy" Sep 13 10:14:35.611293 
kubelet[2391]: I0913 10:14:35.611114 2391 container_manager_linux.go:303] "Creating device plugin manager" Sep 13 10:14:35.612931 kubelet[2391]: I0913 10:14:35.612897 2391 state_mem.go:36] "Initialized new in-memory state store" Sep 13 10:14:35.616562 kubelet[2391]: I0913 10:14:35.616527 2391 kubelet.go:480] "Attempting to sync node with API server" Sep 13 10:14:35.616562 kubelet[2391]: I0913 10:14:35.616558 2391 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Sep 13 10:14:35.616644 kubelet[2391]: I0913 10:14:35.616609 2391 kubelet.go:386] "Adding apiserver pod source" Sep 13 10:14:35.616670 kubelet[2391]: I0913 10:14:35.616646 2391 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Sep 13 10:14:35.624068 kubelet[2391]: E0913 10:14:35.624004 2391 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.19:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.19:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Sep 13 10:14:35.624322 kubelet[2391]: E0913 10:14:35.624279 2391 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.19:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.19:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Sep 13 10:14:35.626598 kubelet[2391]: I0913 10:14:35.626535 2391 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v2.0.5" apiVersion="v1" Sep 13 10:14:35.627460 kubelet[2391]: I0913 10:14:35.627401 2391 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Sep 13 10:14:35.628730 kubelet[2391]: W0913 
10:14:35.628691 2391 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Sep 13 10:14:35.632017 kubelet[2391]: I0913 10:14:35.631998 2391 watchdog_linux.go:99] "Systemd watchdog is not enabled" Sep 13 10:14:35.632077 kubelet[2391]: I0913 10:14:35.632059 2391 server.go:1289] "Started kubelet" Sep 13 10:14:35.632367 kubelet[2391]: I0913 10:14:35.632330 2391 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Sep 13 10:14:35.650531 kubelet[2391]: I0913 10:14:35.650233 2391 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Sep 13 10:14:35.650718 kubelet[2391]: I0913 10:14:35.650679 2391 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Sep 13 10:14:35.651737 kubelet[2391]: I0913 10:14:35.651690 2391 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Sep 13 10:14:35.655716 kubelet[2391]: I0913 10:14:35.655661 2391 server.go:317] "Adding debug handlers to kubelet server" Sep 13 10:14:35.656882 kubelet[2391]: I0913 10:14:35.656858 2391 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Sep 13 10:14:35.657288 kubelet[2391]: E0913 10:14:35.657126 2391 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 13 10:14:35.657288 kubelet[2391]: I0913 10:14:35.657184 2391 volume_manager.go:297] "Starting Kubelet Volume Manager" Sep 13 10:14:35.657511 kubelet[2391]: I0913 10:14:35.657451 2391 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Sep 13 10:14:35.657656 kubelet[2391]: I0913 10:14:35.657562 2391 reconciler.go:26] "Reconciler: start to sync state" Sep 13 10:14:35.658263 kubelet[2391]: E0913 10:14:35.658064 2391 reflector.go:200] "Failed to watch" err="failed to list 
*v1.CSIDriver: Get \"https://10.0.0.19:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.19:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Sep 13 10:14:35.658310 kubelet[2391]: I0913 10:14:35.658280 2391 factory.go:223] Registration of the systemd container factory successfully Sep 13 10:14:35.658393 kubelet[2391]: I0913 10:14:35.658363 2391 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Sep 13 10:14:35.659362 kubelet[2391]: E0913 10:14:35.657649 2391 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.19:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.19:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.1864d00544fa87d8 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-09-13 10:14:35.632019416 +0000 UTC m=+0.562035191,LastTimestamp:2025-09-13 10:14:35.632019416 +0000 UTC m=+0.562035191,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Sep 13 10:14:35.659709 kubelet[2391]: E0913 10:14:35.659473 2391 kubelet.go:1600] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Sep 13 10:14:35.659776 kubelet[2391]: E0913 10:14:35.659749 2391 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.19:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.19:6443: connect: connection refused" interval="200ms" Sep 13 10:14:35.659929 kubelet[2391]: I0913 10:14:35.659911 2391 factory.go:223] Registration of the containerd container factory successfully Sep 13 10:14:35.676220 kubelet[2391]: I0913 10:14:35.676185 2391 cpu_manager.go:221] "Starting CPU manager" policy="none" Sep 13 10:14:35.676220 kubelet[2391]: I0913 10:14:35.676203 2391 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Sep 13 10:14:35.676220 kubelet[2391]: I0913 10:14:35.676220 2391 state_mem.go:36] "Initialized new in-memory state store" Sep 13 10:14:35.677618 kubelet[2391]: I0913 10:14:35.677571 2391 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Sep 13 10:14:35.680073 kubelet[2391]: I0913 10:14:35.679259 2391 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" Sep 13 10:14:35.680073 kubelet[2391]: I0913 10:14:35.679298 2391 status_manager.go:230] "Starting to sync pod status with apiserver" Sep 13 10:14:35.680073 kubelet[2391]: I0913 10:14:35.679345 2391 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Sep 13 10:14:35.680073 kubelet[2391]: I0913 10:14:35.679358 2391 kubelet.go:2436] "Starting kubelet main sync loop" Sep 13 10:14:35.680073 kubelet[2391]: E0913 10:14:35.679416 2391 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Sep 13 10:14:35.680073 kubelet[2391]: E0913 10:14:35.679929 2391 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.19:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.19:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Sep 13 10:14:35.758394 kubelet[2391]: E0913 10:14:35.758265 2391 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 13 10:14:35.779765 kubelet[2391]: E0913 10:14:35.779703 2391 kubelet.go:2460] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Sep 13 10:14:35.859175 kubelet[2391]: E0913 10:14:35.859107 2391 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 13 10:14:35.860841 kubelet[2391]: E0913 10:14:35.860787 2391 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.19:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.19:6443: connect: connection refused" interval="400ms" Sep 13 10:14:35.959220 kubelet[2391]: E0913 10:14:35.959184 2391 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 13 10:14:35.980578 kubelet[2391]: E0913 10:14:35.980549 2391 kubelet.go:2460] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Sep 13 10:14:36.059985 kubelet[2391]: E0913 10:14:36.059943 2391 
kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 13 10:14:36.161101 kubelet[2391]: E0913 10:14:36.161054 2391 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 13 10:14:36.261718 kubelet[2391]: E0913 10:14:36.261650 2391 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 13 10:14:36.262210 kubelet[2391]: E0913 10:14:36.262156 2391 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.19:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.19:6443: connect: connection refused" interval="800ms" Sep 13 10:14:36.349036 kubelet[2391]: I0913 10:14:36.348924 2391 policy_none.go:49] "None policy: Start" Sep 13 10:14:36.349036 kubelet[2391]: I0913 10:14:36.348984 2391 memory_manager.go:186] "Starting memorymanager" policy="None" Sep 13 10:14:36.349036 kubelet[2391]: I0913 10:14:36.349019 2391 state_mem.go:35] "Initializing new in-memory state store" Sep 13 10:14:36.362043 kubelet[2391]: E0913 10:14:36.361993 2391 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 13 10:14:36.365535 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Sep 13 10:14:36.381259 kubelet[2391]: E0913 10:14:36.381215 2391 kubelet.go:2460] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Sep 13 10:14:36.382428 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Sep 13 10:14:36.386004 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. 
Sep 13 10:14:36.404573 kubelet[2391]: E0913 10:14:36.404543 2391 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Sep 13 10:14:36.405129 kubelet[2391]: I0913 10:14:36.405099 2391 eviction_manager.go:189] "Eviction manager: starting control loop" Sep 13 10:14:36.405180 kubelet[2391]: I0913 10:14:36.405120 2391 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Sep 13 10:14:36.406031 kubelet[2391]: I0913 10:14:36.405999 2391 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Sep 13 10:14:36.408150 kubelet[2391]: E0913 10:14:36.408123 2391 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Sep 13 10:14:36.408524 kubelet[2391]: E0913 10:14:36.408167 2391 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Sep 13 10:14:36.506748 kubelet[2391]: I0913 10:14:36.506721 2391 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Sep 13 10:14:36.507184 kubelet[2391]: E0913 10:14:36.507144 2391 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.19:6443/api/v1/nodes\": dial tcp 10.0.0.19:6443: connect: connection refused" node="localhost" Sep 13 10:14:36.549034 kubelet[2391]: E0913 10:14:36.548990 2391 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.19:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.19:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Sep 13 10:14:36.609620 kubelet[2391]: E0913 10:14:36.609477 2391 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get 
\"https://10.0.0.19:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.19:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Sep 13 10:14:36.640269 kubelet[2391]: E0913 10:14:36.640213 2391 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.19:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.19:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Sep 13 10:14:36.682694 kubelet[2391]: E0913 10:14:36.682642 2391 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.19:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.19:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Sep 13 10:14:36.709382 kubelet[2391]: I0913 10:14:36.709340 2391 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Sep 13 10:14:36.709768 kubelet[2391]: E0913 10:14:36.709726 2391 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.19:6443/api/v1/nodes\": dial tcp 10.0.0.19:6443: connect: connection refused" node="localhost" Sep 13 10:14:37.063233 kubelet[2391]: E0913 10:14:37.063164 2391 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.19:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.19:6443: connect: connection refused" interval="1.6s" Sep 13 10:14:37.111975 kubelet[2391]: I0913 10:14:37.111873 2391 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Sep 13 10:14:37.112388 kubelet[2391]: E0913 10:14:37.112312 2391 kubelet_node_status.go:107] "Unable to 
register node with API server" err="Post \"https://10.0.0.19:6443/api/v1/nodes\": dial tcp 10.0.0.19:6443: connect: connection refused" node="localhost" Sep 13 10:14:37.194071 systemd[1]: Created slice kubepods-burstable-podca278cb862bd2c05db5485a3a564d2ed.slice - libcontainer container kubepods-burstable-podca278cb862bd2c05db5485a3a564d2ed.slice. Sep 13 10:14:37.209340 kubelet[2391]: E0913 10:14:37.209309 2391 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 13 10:14:37.213467 systemd[1]: Created slice kubepods-burstable-podb678d5c6713e936e66aa5bb73166297e.slice - libcontainer container kubepods-burstable-podb678d5c6713e936e66aa5bb73166297e.slice. Sep 13 10:14:37.215245 kubelet[2391]: E0913 10:14:37.215214 2391 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 13 10:14:37.217176 systemd[1]: Created slice kubepods-burstable-pod7b968cf906b2d9d713a362c43868bef2.slice - libcontainer container kubepods-burstable-pod7b968cf906b2d9d713a362c43868bef2.slice. 
Sep 13 10:14:37.218917 kubelet[2391]: E0913 10:14:37.218900 2391 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 13 10:14:37.267445 kubelet[2391]: I0913 10:14:37.267402 2391 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/b678d5c6713e936e66aa5bb73166297e-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"b678d5c6713e936e66aa5bb73166297e\") " pod="kube-system/kube-controller-manager-localhost" Sep 13 10:14:37.267445 kubelet[2391]: I0913 10:14:37.267440 2391 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/b678d5c6713e936e66aa5bb73166297e-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"b678d5c6713e936e66aa5bb73166297e\") " pod="kube-system/kube-controller-manager-localhost" Sep 13 10:14:37.267650 kubelet[2391]: I0913 10:14:37.267465 2391 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/7b968cf906b2d9d713a362c43868bef2-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"7b968cf906b2d9d713a362c43868bef2\") " pod="kube-system/kube-scheduler-localhost" Sep 13 10:14:37.267650 kubelet[2391]: I0913 10:14:37.267479 2391 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/ca278cb862bd2c05db5485a3a564d2ed-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"ca278cb862bd2c05db5485a3a564d2ed\") " pod="kube-system/kube-apiserver-localhost" Sep 13 10:14:37.267650 kubelet[2391]: I0913 10:14:37.267564 2391 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: 
\"kubernetes.io/host-path/ca278cb862bd2c05db5485a3a564d2ed-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"ca278cb862bd2c05db5485a3a564d2ed\") " pod="kube-system/kube-apiserver-localhost" Sep 13 10:14:37.267650 kubelet[2391]: I0913 10:14:37.267610 2391 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/ca278cb862bd2c05db5485a3a564d2ed-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"ca278cb862bd2c05db5485a3a564d2ed\") " pod="kube-system/kube-apiserver-localhost" Sep 13 10:14:37.267650 kubelet[2391]: I0913 10:14:37.267642 2391 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/b678d5c6713e936e66aa5bb73166297e-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"b678d5c6713e936e66aa5bb73166297e\") " pod="kube-system/kube-controller-manager-localhost" Sep 13 10:14:37.267774 kubelet[2391]: I0913 10:14:37.267665 2391 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/b678d5c6713e936e66aa5bb73166297e-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"b678d5c6713e936e66aa5bb73166297e\") " pod="kube-system/kube-controller-manager-localhost" Sep 13 10:14:37.267774 kubelet[2391]: I0913 10:14:37.267698 2391 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/b678d5c6713e936e66aa5bb73166297e-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"b678d5c6713e936e66aa5bb73166297e\") " pod="kube-system/kube-controller-manager-localhost" Sep 13 10:14:37.510327 kubelet[2391]: E0913 10:14:37.510182 2391 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the 
applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 10:14:37.511143 containerd[1560]: time="2025-09-13T10:14:37.511096296Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:ca278cb862bd2c05db5485a3a564d2ed,Namespace:kube-system,Attempt:0,}" Sep 13 10:14:37.516258 kubelet[2391]: E0913 10:14:37.516224 2391 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 10:14:37.516664 containerd[1560]: time="2025-09-13T10:14:37.516619755Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:b678d5c6713e936e66aa5bb73166297e,Namespace:kube-system,Attempt:0,}" Sep 13 10:14:37.520029 kubelet[2391]: E0913 10:14:37.520006 2391 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 10:14:37.520340 containerd[1560]: time="2025-09-13T10:14:37.520311011Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:7b968cf906b2d9d713a362c43868bef2,Namespace:kube-system,Attempt:0,}" Sep 13 10:14:37.553536 containerd[1560]: time="2025-09-13T10:14:37.553253306Z" level=info msg="connecting to shim d5ccb31a204a138762e8b7749fe9b50405b9ed8d51ff9b94506ca0e0bcc38b77" address="unix:///run/containerd/s/4f52d8e31afc277d6a68a96f118bb37488e18daacc3c8e61f8a86b8506ca2247" namespace=k8s.io protocol=ttrpc version=3 Sep 13 10:14:37.563664 containerd[1560]: time="2025-09-13T10:14:37.563619970Z" level=info msg="connecting to shim 17e0567c3a79aa5562cf965f4691a771a7aa517121d8c5a878080b9d051dc32d" address="unix:///run/containerd/s/2e759fa80191169faf7e167885bab5b623fc690639a093e11069ddb0090dee4b" namespace=k8s.io protocol=ttrpc version=3 Sep 13 10:14:37.575000 containerd[1560]: time="2025-09-13T10:14:37.574936270Z" level=info msg="connecting to shim 
04ebb291a1f638a95b90f40990ee9b237bcd85496a1c9ad17c798f1ea4b0283d" address="unix:///run/containerd/s/5e5f835c62a11ffbf421a8b6ee7476921c1bdb8d0be0aff6dc963f5811b555d9" namespace=k8s.io protocol=ttrpc version=3 Sep 13 10:14:37.595824 systemd[1]: Started cri-containerd-d5ccb31a204a138762e8b7749fe9b50405b9ed8d51ff9b94506ca0e0bcc38b77.scope - libcontainer container d5ccb31a204a138762e8b7749fe9b50405b9ed8d51ff9b94506ca0e0bcc38b77. Sep 13 10:14:37.603881 systemd[1]: Started cri-containerd-17e0567c3a79aa5562cf965f4691a771a7aa517121d8c5a878080b9d051dc32d.scope - libcontainer container 17e0567c3a79aa5562cf965f4691a771a7aa517121d8c5a878080b9d051dc32d. Sep 13 10:14:37.613603 kubelet[2391]: E0913 10:14:37.613562 2391 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.19:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.19:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Sep 13 10:14:37.650697 systemd[1]: Started cri-containerd-04ebb291a1f638a95b90f40990ee9b237bcd85496a1c9ad17c798f1ea4b0283d.scope - libcontainer container 04ebb291a1f638a95b90f40990ee9b237bcd85496a1c9ad17c798f1ea4b0283d. 
Sep 13 10:14:37.712537 containerd[1560]: time="2025-09-13T10:14:37.712194459Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:ca278cb862bd2c05db5485a3a564d2ed,Namespace:kube-system,Attempt:0,} returns sandbox id \"d5ccb31a204a138762e8b7749fe9b50405b9ed8d51ff9b94506ca0e0bcc38b77\"" Sep 13 10:14:37.714546 kubelet[2391]: E0913 10:14:37.714493 2391 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 10:14:37.719662 containerd[1560]: time="2025-09-13T10:14:37.719613250Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:b678d5c6713e936e66aa5bb73166297e,Namespace:kube-system,Attempt:0,} returns sandbox id \"17e0567c3a79aa5562cf965f4691a771a7aa517121d8c5a878080b9d051dc32d\"" Sep 13 10:14:37.720322 kubelet[2391]: E0913 10:14:37.720287 2391 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 10:14:37.721443 containerd[1560]: time="2025-09-13T10:14:37.721394537Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:7b968cf906b2d9d713a362c43868bef2,Namespace:kube-system,Attempt:0,} returns sandbox id \"04ebb291a1f638a95b90f40990ee9b237bcd85496a1c9ad17c798f1ea4b0283d\"" Sep 13 10:14:37.722237 kubelet[2391]: E0913 10:14:37.722216 2391 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 10:14:37.722757 containerd[1560]: time="2025-09-13T10:14:37.722700699Z" level=info msg="CreateContainer within sandbox \"d5ccb31a204a138762e8b7749fe9b50405b9ed8d51ff9b94506ca0e0bcc38b77\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Sep 13 10:14:37.725270 containerd[1560]: 
time="2025-09-13T10:14:37.725222523Z" level=info msg="CreateContainer within sandbox \"17e0567c3a79aa5562cf965f4691a771a7aa517121d8c5a878080b9d051dc32d\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Sep 13 10:14:37.728156 containerd[1560]: time="2025-09-13T10:14:37.728118998Z" level=info msg="CreateContainer within sandbox \"04ebb291a1f638a95b90f40990ee9b237bcd85496a1c9ad17c798f1ea4b0283d\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Sep 13 10:14:37.737836 containerd[1560]: time="2025-09-13T10:14:37.737807914Z" level=info msg="Container 4121cefc44d29069bb055ea7a0fff1674f515bb972b90ecb55903d895a010929: CDI devices from CRI Config.CDIDevices: []" Sep 13 10:14:37.743768 containerd[1560]: time="2025-09-13T10:14:37.743743977Z" level=info msg="Container 948131e1f5e671d9252280ad1bde0fff6da3bff00d0c6576cb602423aab0279e: CDI devices from CRI Config.CDIDevices: []" Sep 13 10:14:37.746333 containerd[1560]: time="2025-09-13T10:14:37.746257815Z" level=info msg="Container e230d37744b262b172e7c857596b161e943e28b328e800ea97183823ba71251d: CDI devices from CRI Config.CDIDevices: []" Sep 13 10:14:37.750157 containerd[1560]: time="2025-09-13T10:14:37.750116139Z" level=info msg="CreateContainer within sandbox \"d5ccb31a204a138762e8b7749fe9b50405b9ed8d51ff9b94506ca0e0bcc38b77\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"4121cefc44d29069bb055ea7a0fff1674f515bb972b90ecb55903d895a010929\"" Sep 13 10:14:37.750897 containerd[1560]: time="2025-09-13T10:14:37.750871074Z" level=info msg="StartContainer for \"4121cefc44d29069bb055ea7a0fff1674f515bb972b90ecb55903d895a010929\"" Sep 13 10:14:37.751979 containerd[1560]: time="2025-09-13T10:14:37.751957509Z" level=info msg="connecting to shim 4121cefc44d29069bb055ea7a0fff1674f515bb972b90ecb55903d895a010929" address="unix:///run/containerd/s/4f52d8e31afc277d6a68a96f118bb37488e18daacc3c8e61f8a86b8506ca2247" protocol=ttrpc version=3 Sep 13 10:14:37.755266 
containerd[1560]: time="2025-09-13T10:14:37.755224378Z" level=info msg="CreateContainer within sandbox \"17e0567c3a79aa5562cf965f4691a771a7aa517121d8c5a878080b9d051dc32d\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"948131e1f5e671d9252280ad1bde0fff6da3bff00d0c6576cb602423aab0279e\"" Sep 13 10:14:37.755844 containerd[1560]: time="2025-09-13T10:14:37.755816774Z" level=info msg="StartContainer for \"948131e1f5e671d9252280ad1bde0fff6da3bff00d0c6576cb602423aab0279e\"" Sep 13 10:14:37.757359 containerd[1560]: time="2025-09-13T10:14:37.757328658Z" level=info msg="connecting to shim 948131e1f5e671d9252280ad1bde0fff6da3bff00d0c6576cb602423aab0279e" address="unix:///run/containerd/s/2e759fa80191169faf7e167885bab5b623fc690639a093e11069ddb0090dee4b" protocol=ttrpc version=3 Sep 13 10:14:37.759820 containerd[1560]: time="2025-09-13T10:14:37.759775348Z" level=info msg="CreateContainer within sandbox \"04ebb291a1f638a95b90f40990ee9b237bcd85496a1c9ad17c798f1ea4b0283d\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"e230d37744b262b172e7c857596b161e943e28b328e800ea97183823ba71251d\"" Sep 13 10:14:37.761274 containerd[1560]: time="2025-09-13T10:14:37.760548658Z" level=info msg="StartContainer for \"e230d37744b262b172e7c857596b161e943e28b328e800ea97183823ba71251d\"" Sep 13 10:14:37.764452 containerd[1560]: time="2025-09-13T10:14:37.764420797Z" level=info msg="connecting to shim e230d37744b262b172e7c857596b161e943e28b328e800ea97183823ba71251d" address="unix:///run/containerd/s/5e5f835c62a11ffbf421a8b6ee7476921c1bdb8d0be0aff6dc963f5811b555d9" protocol=ttrpc version=3 Sep 13 10:14:37.780720 systemd[1]: Started cri-containerd-4121cefc44d29069bb055ea7a0fff1674f515bb972b90ecb55903d895a010929.scope - libcontainer container 4121cefc44d29069bb055ea7a0fff1674f515bb972b90ecb55903d895a010929. 
Sep 13 10:14:37.785604 systemd[1]: Started cri-containerd-948131e1f5e671d9252280ad1bde0fff6da3bff00d0c6576cb602423aab0279e.scope - libcontainer container 948131e1f5e671d9252280ad1bde0fff6da3bff00d0c6576cb602423aab0279e. Sep 13 10:14:37.787616 systemd[1]: Started cri-containerd-e230d37744b262b172e7c857596b161e943e28b328e800ea97183823ba71251d.scope - libcontainer container e230d37744b262b172e7c857596b161e943e28b328e800ea97183823ba71251d. Sep 13 10:14:37.851552 containerd[1560]: time="2025-09-13T10:14:37.851398168Z" level=info msg="StartContainer for \"948131e1f5e671d9252280ad1bde0fff6da3bff00d0c6576cb602423aab0279e\" returns successfully" Sep 13 10:14:37.857369 containerd[1560]: time="2025-09-13T10:14:37.857290308Z" level=info msg="StartContainer for \"4121cefc44d29069bb055ea7a0fff1674f515bb972b90ecb55903d895a010929\" returns successfully" Sep 13 10:14:37.872265 containerd[1560]: time="2025-09-13T10:14:37.872119735Z" level=info msg="StartContainer for \"e230d37744b262b172e7c857596b161e943e28b328e800ea97183823ba71251d\" returns successfully" Sep 13 10:14:37.914542 kubelet[2391]: I0913 10:14:37.914425 2391 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Sep 13 10:14:37.915138 kubelet[2391]: E0913 10:14:37.915103 2391 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.19:6443/api/v1/nodes\": dial tcp 10.0.0.19:6443: connect: connection refused" node="localhost" Sep 13 10:14:38.739011 kubelet[2391]: E0913 10:14:38.738942 2391 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 13 10:14:38.739753 kubelet[2391]: E0913 10:14:38.739730 2391 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 10:14:38.746531 kubelet[2391]: E0913 10:14:38.746242 2391 kubelet.go:3305] "No need to 
create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 13 10:14:38.746531 kubelet[2391]: E0913 10:14:38.746362 2391 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 10:14:38.747299 kubelet[2391]: E0913 10:14:38.747202 2391 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 13 10:14:38.747360 kubelet[2391]: E0913 10:14:38.747349 2391 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 10:14:39.539000 kubelet[2391]: I0913 10:14:39.538937 2391 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Sep 13 10:14:39.734659 kubelet[2391]: E0913 10:14:39.734586 2391 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Sep 13 10:14:39.748819 kubelet[2391]: E0913 10:14:39.748581 2391 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 13 10:14:39.748819 kubelet[2391]: E0913 10:14:39.748734 2391 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 10:14:39.752125 kubelet[2391]: E0913 10:14:39.752042 2391 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 13 10:14:39.752239 kubelet[2391]: E0913 10:14:39.752192 2391 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, 
the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 10:14:39.816524 kubelet[2391]: I0913 10:14:39.816390 2391 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Sep 13 10:14:39.889748 kubelet[2391]: I0913 10:14:39.889694 2391 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Sep 13 10:14:39.899861 kubelet[2391]: E0913 10:14:39.899820 2391 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost" Sep 13 10:14:39.900216 kubelet[2391]: I0913 10:14:39.900053 2391 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Sep 13 10:14:39.902659 kubelet[2391]: E0913 10:14:39.902608 2391 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-localhost" Sep 13 10:14:39.902659 kubelet[2391]: I0913 10:14:39.902644 2391 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Sep 13 10:14:39.904584 kubelet[2391]: E0913 10:14:39.904545 2391 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost" Sep 13 10:14:40.177796 update_engine[1546]: I20250913 10:14:40.177593 1546 update_attempter.cc:509] Updating boot flags... 
Sep 13 10:14:40.626612 kubelet[2391]: I0913 10:14:40.626570 2391 apiserver.go:52] "Watching apiserver" Sep 13 10:14:40.658023 kubelet[2391]: I0913 10:14:40.657889 2391 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Sep 13 10:14:40.748316 kubelet[2391]: I0913 10:14:40.748284 2391 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Sep 13 10:14:40.754913 kubelet[2391]: E0913 10:14:40.754863 2391 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 10:14:40.858946 kubelet[2391]: I0913 10:14:40.858897 2391 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Sep 13 10:14:40.864177 kubelet[2391]: E0913 10:14:40.864132 2391 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 10:14:41.749690 kubelet[2391]: E0913 10:14:41.749595 2391 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 10:14:41.749931 kubelet[2391]: E0913 10:14:41.749898 2391 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 10:14:42.015964 systemd[1]: Reload requested from client PID 2695 ('systemctl') (unit session-9.scope)... Sep 13 10:14:42.015986 systemd[1]: Reloading... Sep 13 10:14:42.096550 zram_generator::config[2737]: No configuration found. Sep 13 10:14:42.345021 systemd[1]: Reloading finished in 328 ms. Sep 13 10:14:42.377987 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... 
Sep 13 10:14:42.397187 systemd[1]: kubelet.service: Deactivated successfully. Sep 13 10:14:42.397585 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Sep 13 10:14:42.397663 systemd[1]: kubelet.service: Consumed 1.157s CPU time, 134.4M memory peak. Sep 13 10:14:42.399943 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 13 10:14:42.637223 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 13 10:14:42.655056 (kubelet)[2783]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Sep 13 10:14:42.698249 kubelet[2783]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 13 10:14:42.698249 kubelet[2783]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Sep 13 10:14:42.698249 kubelet[2783]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Sep 13 10:14:42.698703 kubelet[2783]: I0913 10:14:42.698297 2783 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Sep 13 10:14:42.704906 kubelet[2783]: I0913 10:14:42.704862 2783 server.go:530] "Kubelet version" kubeletVersion="v1.33.0" Sep 13 10:14:42.704906 kubelet[2783]: I0913 10:14:42.704894 2783 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Sep 13 10:14:42.705152 kubelet[2783]: I0913 10:14:42.705129 2783 server.go:956] "Client rotation is on, will bootstrap in background" Sep 13 10:14:42.707456 kubelet[2783]: I0913 10:14:42.706984 2783 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Sep 13 10:14:42.711415 kubelet[2783]: I0913 10:14:42.711367 2783 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Sep 13 10:14:42.716266 kubelet[2783]: I0913 10:14:42.716216 2783 server.go:1446] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Sep 13 10:14:42.722800 kubelet[2783]: I0913 10:14:42.722741 2783 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Sep 13 10:14:42.723123 kubelet[2783]: I0913 10:14:42.723089 2783 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Sep 13 10:14:42.723306 kubelet[2783]: I0913 10:14:42.723122 2783 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Sep 13 10:14:42.723392 kubelet[2783]: I0913 10:14:42.723314 2783 topology_manager.go:138] "Creating topology manager with none policy" Sep 13 10:14:42.723392 
kubelet[2783]: I0913 10:14:42.723326 2783 container_manager_linux.go:303] "Creating device plugin manager" Sep 13 10:14:42.723392 kubelet[2783]: I0913 10:14:42.723392 2783 state_mem.go:36] "Initialized new in-memory state store" Sep 13 10:14:42.723707 kubelet[2783]: I0913 10:14:42.723688 2783 kubelet.go:480] "Attempting to sync node with API server" Sep 13 10:14:42.723758 kubelet[2783]: I0913 10:14:42.723711 2783 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Sep 13 10:14:42.723758 kubelet[2783]: I0913 10:14:42.723741 2783 kubelet.go:386] "Adding apiserver pod source" Sep 13 10:14:42.723810 kubelet[2783]: I0913 10:14:42.723761 2783 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Sep 13 10:14:42.724986 kubelet[2783]: I0913 10:14:42.724947 2783 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v2.0.5" apiVersion="v1" Sep 13 10:14:42.725540 kubelet[2783]: I0913 10:14:42.725493 2783 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Sep 13 10:14:42.734038 kubelet[2783]: I0913 10:14:42.732840 2783 watchdog_linux.go:99] "Systemd watchdog is not enabled" Sep 13 10:14:42.734038 kubelet[2783]: I0913 10:14:42.732906 2783 server.go:1289] "Started kubelet" Sep 13 10:14:42.734038 kubelet[2783]: I0913 10:14:42.733352 2783 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Sep 13 10:14:42.734038 kubelet[2783]: I0913 10:14:42.733541 2783 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Sep 13 10:14:42.734038 kubelet[2783]: I0913 10:14:42.733877 2783 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Sep 13 10:14:42.739269 kubelet[2783]: I0913 10:14:42.739233 2783 server.go:317] "Adding debug handlers to kubelet server" Sep 13 10:14:42.741960 
kubelet[2783]: I0913 10:14:42.741943 2783 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Sep 13 10:14:42.742519 kubelet[2783]: I0913 10:14:42.742474 2783 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Sep 13 10:14:42.742859 kubelet[2783]: I0913 10:14:42.742791 2783 volume_manager.go:297] "Starting Kubelet Volume Manager" Sep 13 10:14:42.745462 kubelet[2783]: I0913 10:14:42.745439 2783 factory.go:223] Registration of the systemd container factory successfully Sep 13 10:14:42.745689 kubelet[2783]: I0913 10:14:42.745670 2783 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Sep 13 10:14:42.745817 kubelet[2783]: I0913 10:14:42.745802 2783 reconciler.go:26] "Reconciler: start to sync state" Sep 13 10:14:42.746184 kubelet[2783]: I0913 10:14:42.746147 2783 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Sep 13 10:14:42.747163 kubelet[2783]: E0913 10:14:42.747093 2783 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Sep 13 10:14:42.749767 kubelet[2783]: I0913 10:14:42.749689 2783 factory.go:223] Registration of the containerd container factory successfully Sep 13 10:14:42.764404 kubelet[2783]: I0913 10:14:42.764129 2783 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Sep 13 10:14:42.767535 kubelet[2783]: I0913 10:14:42.767206 2783 kubelet_network_linux.go:49] "Initialized iptables rules." 
protocol="IPv6" Sep 13 10:14:42.767535 kubelet[2783]: I0913 10:14:42.767247 2783 status_manager.go:230] "Starting to sync pod status with apiserver" Sep 13 10:14:42.767535 kubelet[2783]: I0913 10:14:42.767273 2783 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Sep 13 10:14:42.767535 kubelet[2783]: I0913 10:14:42.767283 2783 kubelet.go:2436] "Starting kubelet main sync loop" Sep 13 10:14:42.767535 kubelet[2783]: E0913 10:14:42.767335 2783 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Sep 13 10:14:42.803300 kubelet[2783]: I0913 10:14:42.803259 2783 cpu_manager.go:221] "Starting CPU manager" policy="none" Sep 13 10:14:42.803300 kubelet[2783]: I0913 10:14:42.803286 2783 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Sep 13 10:14:42.803300 kubelet[2783]: I0913 10:14:42.803315 2783 state_mem.go:36] "Initialized new in-memory state store" Sep 13 10:14:42.803642 kubelet[2783]: I0913 10:14:42.803517 2783 state_mem.go:88] "Updated default CPUSet" cpuSet="" Sep 13 10:14:42.803642 kubelet[2783]: I0913 10:14:42.803528 2783 state_mem.go:96] "Updated CPUSet assignments" assignments={} Sep 13 10:14:42.803642 kubelet[2783]: I0913 10:14:42.803546 2783 policy_none.go:49] "None policy: Start" Sep 13 10:14:42.803642 kubelet[2783]: I0913 10:14:42.803556 2783 memory_manager.go:186] "Starting memorymanager" policy="None" Sep 13 10:14:42.803642 kubelet[2783]: I0913 10:14:42.803566 2783 state_mem.go:35] "Initializing new in-memory state store" Sep 13 10:14:42.803813 kubelet[2783]: I0913 10:14:42.803664 2783 state_mem.go:75] "Updated machine memory state" Sep 13 10:14:42.810113 kubelet[2783]: E0913 10:14:42.810080 2783 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Sep 13 10:14:42.810322 kubelet[2783]: I0913 
10:14:42.810300 2783 eviction_manager.go:189] "Eviction manager: starting control loop" Sep 13 10:14:42.810391 kubelet[2783]: I0913 10:14:42.810318 2783 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Sep 13 10:14:42.810617 kubelet[2783]: I0913 10:14:42.810581 2783 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Sep 13 10:14:42.811953 kubelet[2783]: E0913 10:14:42.811914 2783 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Sep 13 10:14:42.868685 kubelet[2783]: I0913 10:14:42.868650 2783 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Sep 13 10:14:42.868685 kubelet[2783]: I0913 10:14:42.868690 2783 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Sep 13 10:14:42.868909 kubelet[2783]: I0913 10:14:42.868647 2783 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Sep 13 10:14:42.918174 kubelet[2783]: I0913 10:14:42.918054 2783 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Sep 13 10:14:42.947700 kubelet[2783]: I0913 10:14:42.947643 2783 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/b678d5c6713e936e66aa5bb73166297e-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"b678d5c6713e936e66aa5bb73166297e\") " pod="kube-system/kube-controller-manager-localhost" Sep 13 10:14:42.947700 kubelet[2783]: I0913 10:14:42.947682 2783 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/b678d5c6713e936e66aa5bb73166297e-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"b678d5c6713e936e66aa5bb73166297e\") " 
pod="kube-system/kube-controller-manager-localhost" Sep 13 10:14:42.947700 kubelet[2783]: I0913 10:14:42.947708 2783 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/b678d5c6713e936e66aa5bb73166297e-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"b678d5c6713e936e66aa5bb73166297e\") " pod="kube-system/kube-controller-manager-localhost" Sep 13 10:14:42.947950 kubelet[2783]: I0913 10:14:42.947729 2783 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/b678d5c6713e936e66aa5bb73166297e-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"b678d5c6713e936e66aa5bb73166297e\") " pod="kube-system/kube-controller-manager-localhost" Sep 13 10:14:42.947950 kubelet[2783]: I0913 10:14:42.947747 2783 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/b678d5c6713e936e66aa5bb73166297e-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"b678d5c6713e936e66aa5bb73166297e\") " pod="kube-system/kube-controller-manager-localhost" Sep 13 10:14:42.947950 kubelet[2783]: I0913 10:14:42.947790 2783 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/ca278cb862bd2c05db5485a3a564d2ed-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"ca278cb862bd2c05db5485a3a564d2ed\") " pod="kube-system/kube-apiserver-localhost" Sep 13 10:14:42.947950 kubelet[2783]: I0913 10:14:42.947821 2783 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/ca278cb862bd2c05db5485a3a564d2ed-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"ca278cb862bd2c05db5485a3a564d2ed\") " 
pod="kube-system/kube-apiserver-localhost" Sep 13 10:14:42.947950 kubelet[2783]: I0913 10:14:42.947841 2783 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/ca278cb862bd2c05db5485a3a564d2ed-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"ca278cb862bd2c05db5485a3a564d2ed\") " pod="kube-system/kube-apiserver-localhost" Sep 13 10:14:42.948067 kubelet[2783]: I0913 10:14:42.947866 2783 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/7b968cf906b2d9d713a362c43868bef2-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"7b968cf906b2d9d713a362c43868bef2\") " pod="kube-system/kube-scheduler-localhost" Sep 13 10:14:43.208945 kubelet[2783]: E0913 10:14:43.208172 2783 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost" Sep 13 10:14:43.208945 kubelet[2783]: E0913 10:14:43.208201 2783 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 10:14:43.208945 kubelet[2783]: E0913 10:14:43.208818 2783 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Sep 13 10:14:43.209147 kubelet[2783]: E0913 10:14:43.208959 2783 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 10:14:43.209380 kubelet[2783]: E0913 10:14:43.209302 2783 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 
10:14:43.239647 kubelet[2783]: I0913 10:14:43.239587 2783 kubelet_node_status.go:124] "Node was previously registered" node="localhost" Sep 13 10:14:43.239788 kubelet[2783]: I0913 10:14:43.239727 2783 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Sep 13 10:14:43.290008 sudo[2823]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Sep 13 10:14:43.290438 sudo[2823]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0) Sep 13 10:14:43.620675 sudo[2823]: pam_unix(sudo:session): session closed for user root Sep 13 10:14:43.724409 kubelet[2783]: I0913 10:14:43.724367 2783 apiserver.go:52] "Watching apiserver" Sep 13 10:14:43.746009 kubelet[2783]: I0913 10:14:43.745974 2783 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Sep 13 10:14:43.784896 kubelet[2783]: I0913 10:14:43.784858 2783 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Sep 13 10:14:43.785025 kubelet[2783]: I0913 10:14:43.785008 2783 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Sep 13 10:14:43.785427 kubelet[2783]: E0913 10:14:43.785391 2783 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 10:14:43.798911 kubelet[2783]: E0913 10:14:43.798868 2783 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost" Sep 13 10:14:43.799201 kubelet[2783]: E0913 10:14:43.798918 2783 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Sep 13 10:14:43.799447 kubelet[2783]: E0913 10:14:43.799400 2783 dns.go:153] "Nameserver limits exceeded" 
err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 10:14:43.799447 kubelet[2783]: E0913 10:14:43.799438 2783 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 10:14:43.825557 kubelet[2783]: I0913 10:14:43.825433 2783 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=3.8253821439999998 podStartE2EDuration="3.825382144s" podCreationTimestamp="2025-09-13 10:14:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-13 10:14:43.815828953 +0000 UTC m=+1.156078685" watchObservedRunningTime="2025-09-13 10:14:43.825382144 +0000 UTC m=+1.165631866" Sep 13 10:14:43.825926 kubelet[2783]: I0913 10:14:43.825612 2783 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=3.825606999 podStartE2EDuration="3.825606999s" podCreationTimestamp="2025-09-13 10:14:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-13 10:14:43.824864213 +0000 UTC m=+1.165113935" watchObservedRunningTime="2025-09-13 10:14:43.825606999 +0000 UTC m=+1.165856721" Sep 13 10:14:43.846842 kubelet[2783]: I0913 10:14:43.846765 2783 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.846745999 podStartE2EDuration="1.846745999s" podCreationTimestamp="2025-09-13 10:14:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-13 10:14:43.836280732 +0000 UTC m=+1.176530474" 
watchObservedRunningTime="2025-09-13 10:14:43.846745999 +0000 UTC m=+1.186995722" Sep 13 10:14:44.787190 kubelet[2783]: E0913 10:14:44.787131 2783 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 10:14:44.788496 kubelet[2783]: E0913 10:14:44.787983 2783 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 10:14:44.788496 kubelet[2783]: E0913 10:14:44.788267 2783 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 10:14:45.163764 sudo[1793]: pam_unix(sudo:session): session closed for user root Sep 13 10:14:45.165294 sshd[1792]: Connection closed by 10.0.0.1 port 51806 Sep 13 10:14:45.165737 sshd-session[1789]: pam_unix(sshd:session): session closed for user core Sep 13 10:14:45.170065 systemd[1]: sshd@8-10.0.0.19:22-10.0.0.1:51806.service: Deactivated successfully. Sep 13 10:14:45.172694 systemd[1]: session-9.scope: Deactivated successfully. Sep 13 10:14:45.172953 systemd[1]: session-9.scope: Consumed 5.162s CPU time, 261.7M memory peak. Sep 13 10:14:45.176483 systemd-logind[1542]: Session 9 logged out. Waiting for processes to exit. Sep 13 10:14:45.177675 systemd-logind[1542]: Removed session 9. 
Sep 13 10:14:45.788650 kubelet[2783]: E0913 10:14:45.788606 2783 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 10:14:46.596161 kubelet[2783]: E0913 10:14:46.596115 2783 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 10:14:47.697880 kubelet[2783]: I0913 10:14:47.697837 2783 kuberuntime_manager.go:1746] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Sep 13 10:14:47.698324 containerd[1560]: time="2025-09-13T10:14:47.698271081Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Sep 13 10:14:47.698611 kubelet[2783]: I0913 10:14:47.698586 2783 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Sep 13 10:14:48.551547 systemd[1]: Created slice kubepods-besteffort-podd079e21a_e286_44a5_a3e6_cc763dfe9e18.slice - libcontainer container kubepods-besteffort-podd079e21a_e286_44a5_a3e6_cc763dfe9e18.slice. Sep 13 10:14:48.567035 systemd[1]: Created slice kubepods-burstable-podb7bbb2c5_a755_4dd6_849d_87ed55f753a2.slice - libcontainer container kubepods-burstable-podb7bbb2c5_a755_4dd6_849d_87ed55f753a2.slice. 
Sep 13 10:14:48.584445 kubelet[2783]: I0913 10:14:48.584401 2783 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/b7bbb2c5-a755-4dd6-849d-87ed55f753a2-hostproc\") pod \"cilium-r8stm\" (UID: \"b7bbb2c5-a755-4dd6-849d-87ed55f753a2\") " pod="kube-system/cilium-r8stm" Sep 13 10:14:48.584445 kubelet[2783]: I0913 10:14:48.584438 2783 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/b7bbb2c5-a755-4dd6-849d-87ed55f753a2-cni-path\") pod \"cilium-r8stm\" (UID: \"b7bbb2c5-a755-4dd6-849d-87ed55f753a2\") " pod="kube-system/cilium-r8stm" Sep 13 10:14:48.584445 kubelet[2783]: I0913 10:14:48.584455 2783 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/b7bbb2c5-a755-4dd6-849d-87ed55f753a2-host-proc-sys-net\") pod \"cilium-r8stm\" (UID: \"b7bbb2c5-a755-4dd6-849d-87ed55f753a2\") " pod="kube-system/cilium-r8stm" Sep 13 10:14:48.584445 kubelet[2783]: I0913 10:14:48.584470 2783 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/b7bbb2c5-a755-4dd6-849d-87ed55f753a2-hubble-tls\") pod \"cilium-r8stm\" (UID: \"b7bbb2c5-a755-4dd6-849d-87ed55f753a2\") " pod="kube-system/cilium-r8stm" Sep 13 10:14:48.584717 kubelet[2783]: I0913 10:14:48.584485 2783 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b7bbb2c5-a755-4dd6-849d-87ed55f753a2-lib-modules\") pod \"cilium-r8stm\" (UID: \"b7bbb2c5-a755-4dd6-849d-87ed55f753a2\") " pod="kube-system/cilium-r8stm" Sep 13 10:14:48.584717 kubelet[2783]: I0913 10:14:48.584525 2783 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/b7bbb2c5-a755-4dd6-849d-87ed55f753a2-xtables-lock\") pod \"cilium-r8stm\" (UID: \"b7bbb2c5-a755-4dd6-849d-87ed55f753a2\") " pod="kube-system/cilium-r8stm" Sep 13 10:14:48.584717 kubelet[2783]: I0913 10:14:48.584548 2783 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/d079e21a-e286-44a5-a3e6-cc763dfe9e18-kube-proxy\") pod \"kube-proxy-69gld\" (UID: \"d079e21a-e286-44a5-a3e6-cc763dfe9e18\") " pod="kube-system/kube-proxy-69gld" Sep 13 10:14:48.584717 kubelet[2783]: I0913 10:14:48.584563 2783 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/d079e21a-e286-44a5-a3e6-cc763dfe9e18-lib-modules\") pod \"kube-proxy-69gld\" (UID: \"d079e21a-e286-44a5-a3e6-cc763dfe9e18\") " pod="kube-system/kube-proxy-69gld" Sep 13 10:14:48.584717 kubelet[2783]: I0913 10:14:48.584639 2783 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mpnqx\" (UniqueName: \"kubernetes.io/projected/d079e21a-e286-44a5-a3e6-cc763dfe9e18-kube-api-access-mpnqx\") pod \"kube-proxy-69gld\" (UID: \"d079e21a-e286-44a5-a3e6-cc763dfe9e18\") " pod="kube-system/kube-proxy-69gld" Sep 13 10:14:48.584832 kubelet[2783]: I0913 10:14:48.584737 2783 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/b7bbb2c5-a755-4dd6-849d-87ed55f753a2-etc-cni-netd\") pod \"cilium-r8stm\" (UID: \"b7bbb2c5-a755-4dd6-849d-87ed55f753a2\") " pod="kube-system/cilium-r8stm" Sep 13 10:14:48.584832 kubelet[2783]: I0913 10:14:48.584790 2783 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: 
\"kubernetes.io/host-path/d079e21a-e286-44a5-a3e6-cc763dfe9e18-xtables-lock\") pod \"kube-proxy-69gld\" (UID: \"d079e21a-e286-44a5-a3e6-cc763dfe9e18\") " pod="kube-system/kube-proxy-69gld" Sep 13 10:14:48.584832 kubelet[2783]: I0913 10:14:48.584810 2783 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/b7bbb2c5-a755-4dd6-849d-87ed55f753a2-cilium-run\") pod \"cilium-r8stm\" (UID: \"b7bbb2c5-a755-4dd6-849d-87ed55f753a2\") " pod="kube-system/cilium-r8stm" Sep 13 10:14:48.584905 kubelet[2783]: I0913 10:14:48.584830 2783 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/b7bbb2c5-a755-4dd6-849d-87ed55f753a2-bpf-maps\") pod \"cilium-r8stm\" (UID: \"b7bbb2c5-a755-4dd6-849d-87ed55f753a2\") " pod="kube-system/cilium-r8stm" Sep 13 10:14:48.584905 kubelet[2783]: I0913 10:14:48.584876 2783 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/b7bbb2c5-a755-4dd6-849d-87ed55f753a2-cilium-cgroup\") pod \"cilium-r8stm\" (UID: \"b7bbb2c5-a755-4dd6-849d-87ed55f753a2\") " pod="kube-system/cilium-r8stm" Sep 13 10:14:48.584968 kubelet[2783]: I0913 10:14:48.584894 2783 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/b7bbb2c5-a755-4dd6-849d-87ed55f753a2-clustermesh-secrets\") pod \"cilium-r8stm\" (UID: \"b7bbb2c5-a755-4dd6-849d-87ed55f753a2\") " pod="kube-system/cilium-r8stm" Sep 13 10:14:48.584968 kubelet[2783]: I0913 10:14:48.584940 2783 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/b7bbb2c5-a755-4dd6-849d-87ed55f753a2-cilium-config-path\") pod \"cilium-r8stm\" (UID: 
\"b7bbb2c5-a755-4dd6-849d-87ed55f753a2\") " pod="kube-system/cilium-r8stm" Sep 13 10:14:48.584968 kubelet[2783]: I0913 10:14:48.584958 2783 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/b7bbb2c5-a755-4dd6-849d-87ed55f753a2-host-proc-sys-kernel\") pod \"cilium-r8stm\" (UID: \"b7bbb2c5-a755-4dd6-849d-87ed55f753a2\") " pod="kube-system/cilium-r8stm" Sep 13 10:14:48.585042 kubelet[2783]: I0913 10:14:48.584977 2783 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-djscd\" (UniqueName: \"kubernetes.io/projected/b7bbb2c5-a755-4dd6-849d-87ed55f753a2-kube-api-access-djscd\") pod \"cilium-r8stm\" (UID: \"b7bbb2c5-a755-4dd6-849d-87ed55f753a2\") " pod="kube-system/cilium-r8stm" Sep 13 10:14:48.763577 systemd[1]: Created slice kubepods-besteffort-pod61458706_555f_4e09_a660_0d5320dabd20.slice - libcontainer container kubepods-besteffort-pod61458706_555f_4e09_a660_0d5320dabd20.slice. 
Sep 13 10:14:48.787246 kubelet[2783]: I0913 10:14:48.787183 2783 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/61458706-555f-4e09-a660-0d5320dabd20-cilium-config-path\") pod \"cilium-operator-6c4d7847fc-lzl4l\" (UID: \"61458706-555f-4e09-a660-0d5320dabd20\") " pod="kube-system/cilium-operator-6c4d7847fc-lzl4l" Sep 13 10:14:48.787246 kubelet[2783]: I0913 10:14:48.787220 2783 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6z9pf\" (UniqueName: \"kubernetes.io/projected/61458706-555f-4e09-a660-0d5320dabd20-kube-api-access-6z9pf\") pod \"cilium-operator-6c4d7847fc-lzl4l\" (UID: \"61458706-555f-4e09-a660-0d5320dabd20\") " pod="kube-system/cilium-operator-6c4d7847fc-lzl4l" Sep 13 10:14:48.861431 kubelet[2783]: E0913 10:14:48.860814 2783 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 10:14:48.861562 containerd[1560]: time="2025-09-13T10:14:48.861446652Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-69gld,Uid:d079e21a-e286-44a5-a3e6-cc763dfe9e18,Namespace:kube-system,Attempt:0,}" Sep 13 10:14:48.871018 kubelet[2783]: E0913 10:14:48.870983 2783 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 10:14:48.871595 containerd[1560]: time="2025-09-13T10:14:48.871554605Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-r8stm,Uid:b7bbb2c5-a755-4dd6-849d-87ed55f753a2,Namespace:kube-system,Attempt:0,}" Sep 13 10:14:48.906607 containerd[1560]: time="2025-09-13T10:14:48.906552221Z" level=info msg="connecting to shim 1b1f2d9a7b7617439e1e985e715a9b0da831b9224e7fd28b8985b2833c34350b" 
address="unix:///run/containerd/s/bdd36644d0bd8616c0ddfd7cc0730efba8e035725af83264c289bb23902e2a9e" namespace=k8s.io protocol=ttrpc version=3 Sep 13 10:14:48.910360 containerd[1560]: time="2025-09-13T10:14:48.910312471Z" level=info msg="connecting to shim db20f802c48d3d7d15ef3f646ba88cf4bf6026f7dffd511cd4f4135874506ac3" address="unix:///run/containerd/s/103dca3845f9587ec2ed2b6f543a9c9d2b8636f18ad82de2131d23bfd62a2ba3" namespace=k8s.io protocol=ttrpc version=3 Sep 13 10:14:48.973794 systemd[1]: Started cri-containerd-db20f802c48d3d7d15ef3f646ba88cf4bf6026f7dffd511cd4f4135874506ac3.scope - libcontainer container db20f802c48d3d7d15ef3f646ba88cf4bf6026f7dffd511cd4f4135874506ac3. Sep 13 10:14:48.978439 systemd[1]: Started cri-containerd-1b1f2d9a7b7617439e1e985e715a9b0da831b9224e7fd28b8985b2833c34350b.scope - libcontainer container 1b1f2d9a7b7617439e1e985e715a9b0da831b9224e7fd28b8985b2833c34350b. Sep 13 10:14:49.009853 containerd[1560]: time="2025-09-13T10:14:49.009787054Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-69gld,Uid:d079e21a-e286-44a5-a3e6-cc763dfe9e18,Namespace:kube-system,Attempt:0,} returns sandbox id \"db20f802c48d3d7d15ef3f646ba88cf4bf6026f7dffd511cd4f4135874506ac3\"" Sep 13 10:14:49.010629 kubelet[2783]: E0913 10:14:49.010595 2783 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 10:14:49.021519 containerd[1560]: time="2025-09-13T10:14:49.021448472Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-r8stm,Uid:b7bbb2c5-a755-4dd6-849d-87ed55f753a2,Namespace:kube-system,Attempt:0,} returns sandbox id \"1b1f2d9a7b7617439e1e985e715a9b0da831b9224e7fd28b8985b2833c34350b\"" Sep 13 10:14:49.024096 containerd[1560]: time="2025-09-13T10:14:49.024039171Z" level=info msg="CreateContainer within sandbox \"db20f802c48d3d7d15ef3f646ba88cf4bf6026f7dffd511cd4f4135874506ac3\" for container 
&ContainerMetadata{Name:kube-proxy,Attempt:0,}" Sep 13 10:14:49.024279 kubelet[2783]: E0913 10:14:49.024258 2783 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 10:14:49.025725 containerd[1560]: time="2025-09-13T10:14:49.025693473Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Sep 13 10:14:49.047754 containerd[1560]: time="2025-09-13T10:14:49.047679065Z" level=info msg="Container 1884de281764c44f98ff10fbc05ba07b75b9eb8e181c6bf5c64dbcfb539dc8cc: CDI devices from CRI Config.CDIDevices: []" Sep 13 10:14:49.061931 containerd[1560]: time="2025-09-13T10:14:49.061867342Z" level=info msg="CreateContainer within sandbox \"db20f802c48d3d7d15ef3f646ba88cf4bf6026f7dffd511cd4f4135874506ac3\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"1884de281764c44f98ff10fbc05ba07b75b9eb8e181c6bf5c64dbcfb539dc8cc\"" Sep 13 10:14:49.062827 containerd[1560]: time="2025-09-13T10:14:49.062757612Z" level=info msg="StartContainer for \"1884de281764c44f98ff10fbc05ba07b75b9eb8e181c6bf5c64dbcfb539dc8cc\"" Sep 13 10:14:49.064467 containerd[1560]: time="2025-09-13T10:14:49.064435098Z" level=info msg="connecting to shim 1884de281764c44f98ff10fbc05ba07b75b9eb8e181c6bf5c64dbcfb539dc8cc" address="unix:///run/containerd/s/103dca3845f9587ec2ed2b6f543a9c9d2b8636f18ad82de2131d23bfd62a2ba3" protocol=ttrpc version=3 Sep 13 10:14:49.067384 kubelet[2783]: E0913 10:14:49.067339 2783 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 10:14:49.067937 containerd[1560]: time="2025-09-13T10:14:49.067913042Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-lzl4l,Uid:61458706-555f-4e09-a660-0d5320dabd20,Namespace:kube-system,Attempt:0,}" Sep 13 10:14:49.090753 systemd[1]: Started cri-containerd-1884de281764c44f98ff10fbc05ba07b75b9eb8e181c6bf5c64dbcfb539dc8cc.scope - libcontainer container 1884de281764c44f98ff10fbc05ba07b75b9eb8e181c6bf5c64dbcfb539dc8cc. Sep 13 10:14:49.091099 containerd[1560]: time="2025-09-13T10:14:49.090858295Z" level=info msg="connecting to shim 31a34b9259364c5a7ea43e3ffc2f3a175cb48582dc7dec2fdd39e8c7aaf85fb2" address="unix:///run/containerd/s/8124f637505d790b557f7fbf343a305d0d699d0d4be6b8816e18bd84b25e60ee" namespace=k8s.io protocol=ttrpc version=3 Sep 13 10:14:49.116899 systemd[1]: Started cri-containerd-31a34b9259364c5a7ea43e3ffc2f3a175cb48582dc7dec2fdd39e8c7aaf85fb2.scope - libcontainer container 31a34b9259364c5a7ea43e3ffc2f3a175cb48582dc7dec2fdd39e8c7aaf85fb2. Sep 13 10:14:49.164562 containerd[1560]: time="2025-09-13T10:14:49.163695336Z" level=info msg="StartContainer for \"1884de281764c44f98ff10fbc05ba07b75b9eb8e181c6bf5c64dbcfb539dc8cc\" returns successfully" Sep 13 10:14:49.169478 containerd[1560]: time="2025-09-13T10:14:49.169403279Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-lzl4l,Uid:61458706-555f-4e09-a660-0d5320dabd20,Namespace:kube-system,Attempt:0,} returns sandbox id \"31a34b9259364c5a7ea43e3ffc2f3a175cb48582dc7dec2fdd39e8c7aaf85fb2\"" Sep 13 10:14:49.170452 kubelet[2783]: E0913 10:14:49.170423 2783 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 10:14:49.798052 kubelet[2783]: E0913 10:14:49.798015 2783 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 10:14:50.359339 kubelet[2783]: E0913 10:14:50.359293 2783 dns.go:153] "Nameserver 
limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 10:14:50.373107 kubelet[2783]: I0913 10:14:50.372681 2783 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-69gld" podStartSLOduration=2.372659201 podStartE2EDuration="2.372659201s" podCreationTimestamp="2025-09-13 10:14:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-13 10:14:49.939432893 +0000 UTC m=+7.279682625" watchObservedRunningTime="2025-09-13 10:14:50.372659201 +0000 UTC m=+7.712908923" Sep 13 10:14:50.800140 kubelet[2783]: E0913 10:14:50.799980 2783 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 10:14:50.853755 kubelet[2783]: E0913 10:14:50.853720 2783 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 10:14:51.801478 kubelet[2783]: E0913 10:14:51.801439 2783 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 10:14:51.801992 kubelet[2783]: E0913 10:14:51.801835 2783 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 10:14:52.803382 kubelet[2783]: E0913 10:14:52.803325 2783 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 10:14:56.601624 kubelet[2783]: E0913 10:14:56.601245 2783 dns.go:153] "Nameserver limits exceeded" 
err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 10:15:03.023566 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount646698661.mount: Deactivated successfully. Sep 13 10:15:07.060205 containerd[1560]: time="2025-09-13T10:15:07.060128922Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 10:15:07.061224 containerd[1560]: time="2025-09-13T10:15:07.061153808Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=166730503" Sep 13 10:15:07.062413 containerd[1560]: time="2025-09-13T10:15:07.062358793Z" level=info msg="ImageCreate event name:\"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 10:15:07.063772 containerd[1560]: time="2025-09-13T10:15:07.063730861Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"166719855\" in 18.037996461s" Sep 13 10:15:07.063772 containerd[1560]: time="2025-09-13T10:15:07.063761008Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\"" Sep 13 10:15:07.065264 containerd[1560]: time="2025-09-13T10:15:07.065176508Z" level=info msg="PullImage 
\"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Sep 13 10:15:07.076627 containerd[1560]: time="2025-09-13T10:15:07.076578216Z" level=info msg="CreateContainer within sandbox \"1b1f2d9a7b7617439e1e985e715a9b0da831b9224e7fd28b8985b2833c34350b\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Sep 13 10:15:07.088733 containerd[1560]: time="2025-09-13T10:15:07.088674448Z" level=info msg="Container 68d06fd673da33eb1116ebe7831800e92969cfa958c0e2e9a3ca9bfdf6fbc738: CDI devices from CRI Config.CDIDevices: []" Sep 13 10:15:07.093022 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1337083701.mount: Deactivated successfully. Sep 13 10:15:07.096978 containerd[1560]: time="2025-09-13T10:15:07.096942687Z" level=info msg="CreateContainer within sandbox \"1b1f2d9a7b7617439e1e985e715a9b0da831b9224e7fd28b8985b2833c34350b\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"68d06fd673da33eb1116ebe7831800e92969cfa958c0e2e9a3ca9bfdf6fbc738\"" Sep 13 10:15:07.097590 containerd[1560]: time="2025-09-13T10:15:07.097539699Z" level=info msg="StartContainer for \"68d06fd673da33eb1116ebe7831800e92969cfa958c0e2e9a3ca9bfdf6fbc738\"" Sep 13 10:15:07.098557 containerd[1560]: time="2025-09-13T10:15:07.098485787Z" level=info msg="connecting to shim 68d06fd673da33eb1116ebe7831800e92969cfa958c0e2e9a3ca9bfdf6fbc738" address="unix:///run/containerd/s/bdd36644d0bd8616c0ddfd7cc0730efba8e035725af83264c289bb23902e2a9e" protocol=ttrpc version=3 Sep 13 10:15:07.127652 systemd[1]: Started cri-containerd-68d06fd673da33eb1116ebe7831800e92969cfa958c0e2e9a3ca9bfdf6fbc738.scope - libcontainer container 68d06fd673da33eb1116ebe7831800e92969cfa958c0e2e9a3ca9bfdf6fbc738. 
Sep 13 10:15:07.162360 containerd[1560]: time="2025-09-13T10:15:07.162313150Z" level=info msg="StartContainer for \"68d06fd673da33eb1116ebe7831800e92969cfa958c0e2e9a3ca9bfdf6fbc738\" returns successfully" Sep 13 10:15:07.175246 systemd[1]: cri-containerd-68d06fd673da33eb1116ebe7831800e92969cfa958c0e2e9a3ca9bfdf6fbc738.scope: Deactivated successfully. Sep 13 10:15:07.178385 containerd[1560]: time="2025-09-13T10:15:07.178341601Z" level=info msg="received exit event container_id:\"68d06fd673da33eb1116ebe7831800e92969cfa958c0e2e9a3ca9bfdf6fbc738\" id:\"68d06fd673da33eb1116ebe7831800e92969cfa958c0e2e9a3ca9bfdf6fbc738\" pid:3217 exited_at:{seconds:1757758507 nanos:177879534}" Sep 13 10:15:07.178707 containerd[1560]: time="2025-09-13T10:15:07.178678886Z" level=info msg="TaskExit event in podsandbox handler container_id:\"68d06fd673da33eb1116ebe7831800e92969cfa958c0e2e9a3ca9bfdf6fbc738\" id:\"68d06fd673da33eb1116ebe7831800e92969cfa958c0e2e9a3ca9bfdf6fbc738\" pid:3217 exited_at:{seconds:1757758507 nanos:177879534}" Sep 13 10:15:07.201346 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-68d06fd673da33eb1116ebe7831800e92969cfa958c0e2e9a3ca9bfdf6fbc738-rootfs.mount: Deactivated successfully. 
Sep 13 10:15:07.836073 kubelet[2783]: E0913 10:15:07.836017 2783 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 10:15:07.842276 containerd[1560]: time="2025-09-13T10:15:07.842216548Z" level=info msg="CreateContainer within sandbox \"1b1f2d9a7b7617439e1e985e715a9b0da831b9224e7fd28b8985b2833c34350b\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Sep 13 10:15:07.867985 containerd[1560]: time="2025-09-13T10:15:07.867925974Z" level=info msg="Container 8d857f75ade53fcf822494d6f9ead4677f0f96ffb55aec6b0e078419975290c5: CDI devices from CRI Config.CDIDevices: []" Sep 13 10:15:07.888043 containerd[1560]: time="2025-09-13T10:15:07.887933204Z" level=info msg="CreateContainer within sandbox \"1b1f2d9a7b7617439e1e985e715a9b0da831b9224e7fd28b8985b2833c34350b\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"8d857f75ade53fcf822494d6f9ead4677f0f96ffb55aec6b0e078419975290c5\"" Sep 13 10:15:07.889042 containerd[1560]: time="2025-09-13T10:15:07.888987525Z" level=info msg="StartContainer for \"8d857f75ade53fcf822494d6f9ead4677f0f96ffb55aec6b0e078419975290c5\"" Sep 13 10:15:07.890417 containerd[1560]: time="2025-09-13T10:15:07.890354023Z" level=info msg="connecting to shim 8d857f75ade53fcf822494d6f9ead4677f0f96ffb55aec6b0e078419975290c5" address="unix:///run/containerd/s/bdd36644d0bd8616c0ddfd7cc0730efba8e035725af83264c289bb23902e2a9e" protocol=ttrpc version=3 Sep 13 10:15:07.916756 systemd[1]: Started cri-containerd-8d857f75ade53fcf822494d6f9ead4677f0f96ffb55aec6b0e078419975290c5.scope - libcontainer container 8d857f75ade53fcf822494d6f9ead4677f0f96ffb55aec6b0e078419975290c5. 
Sep 13 10:15:07.956260 containerd[1560]: time="2025-09-13T10:15:07.956192886Z" level=info msg="StartContainer for \"8d857f75ade53fcf822494d6f9ead4677f0f96ffb55aec6b0e078419975290c5\" returns successfully" Sep 13 10:15:07.974055 systemd[1]: systemd-sysctl.service: Deactivated successfully. Sep 13 10:15:07.974802 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Sep 13 10:15:07.975183 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... Sep 13 10:15:07.978211 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Sep 13 10:15:07.979495 systemd[1]: cri-containerd-8d857f75ade53fcf822494d6f9ead4677f0f96ffb55aec6b0e078419975290c5.scope: Deactivated successfully. Sep 13 10:15:07.981959 containerd[1560]: time="2025-09-13T10:15:07.981875622Z" level=info msg="received exit event container_id:\"8d857f75ade53fcf822494d6f9ead4677f0f96ffb55aec6b0e078419975290c5\" id:\"8d857f75ade53fcf822494d6f9ead4677f0f96ffb55aec6b0e078419975290c5\" pid:3263 exited_at:{seconds:1757758507 nanos:981464499}" Sep 13 10:15:07.982748 containerd[1560]: time="2025-09-13T10:15:07.982692167Z" level=info msg="TaskExit event in podsandbox handler container_id:\"8d857f75ade53fcf822494d6f9ead4677f0f96ffb55aec6b0e078419975290c5\" id:\"8d857f75ade53fcf822494d6f9ead4677f0f96ffb55aec6b0e078419975290c5\" pid:3263 exited_at:{seconds:1757758507 nanos:981464499}" Sep 13 10:15:08.024789 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. 
Sep 13 10:15:08.839517 kubelet[2783]: E0913 10:15:08.839471 2783 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 10:15:08.849885 containerd[1560]: time="2025-09-13T10:15:08.849828926Z" level=info msg="CreateContainer within sandbox \"1b1f2d9a7b7617439e1e985e715a9b0da831b9224e7fd28b8985b2833c34350b\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Sep 13 10:15:08.867166 containerd[1560]: time="2025-09-13T10:15:08.867098226Z" level=info msg="Container 8380169f83e28d30f165e0cbb7c9729c8427f784074027733f185bc6eb08d566: CDI devices from CRI Config.CDIDevices: []" Sep 13 10:15:08.880220 containerd[1560]: time="2025-09-13T10:15:08.880153468Z" level=info msg="CreateContainer within sandbox \"1b1f2d9a7b7617439e1e985e715a9b0da831b9224e7fd28b8985b2833c34350b\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"8380169f83e28d30f165e0cbb7c9729c8427f784074027733f185bc6eb08d566\"" Sep 13 10:15:08.880857 containerd[1560]: time="2025-09-13T10:15:08.880812677Z" level=info msg="StartContainer for \"8380169f83e28d30f165e0cbb7c9729c8427f784074027733f185bc6eb08d566\"" Sep 13 10:15:08.882388 containerd[1560]: time="2025-09-13T10:15:08.882364513Z" level=info msg="connecting to shim 8380169f83e28d30f165e0cbb7c9729c8427f784074027733f185bc6eb08d566" address="unix:///run/containerd/s/bdd36644d0bd8616c0ddfd7cc0730efba8e035725af83264c289bb23902e2a9e" protocol=ttrpc version=3 Sep 13 10:15:08.911776 systemd[1]: Started cri-containerd-8380169f83e28d30f165e0cbb7c9729c8427f784074027733f185bc6eb08d566.scope - libcontainer container 8380169f83e28d30f165e0cbb7c9729c8427f784074027733f185bc6eb08d566. Sep 13 10:15:08.956650 systemd[1]: cri-containerd-8380169f83e28d30f165e0cbb7c9729c8427f784074027733f185bc6eb08d566.scope: Deactivated successfully. 
Sep 13 10:15:08.957899 containerd[1560]: time="2025-09-13T10:15:08.957862752Z" level=info msg="TaskExit event in podsandbox handler container_id:\"8380169f83e28d30f165e0cbb7c9729c8427f784074027733f185bc6eb08d566\" id:\"8380169f83e28d30f165e0cbb7c9729c8427f784074027733f185bc6eb08d566\" pid:3311 exited_at:{seconds:1757758508 nanos:957300315}"
Sep 13 10:15:09.187522 containerd[1560]: time="2025-09-13T10:15:09.187370329Z" level=info msg="received exit event container_id:\"8380169f83e28d30f165e0cbb7c9729c8427f784074027733f185bc6eb08d566\" id:\"8380169f83e28d30f165e0cbb7c9729c8427f784074027733f185bc6eb08d566\" pid:3311 exited_at:{seconds:1757758508 nanos:957300315}"
Sep 13 10:15:09.192808 containerd[1560]: time="2025-09-13T10:15:09.192769854Z" level=info msg="StartContainer for \"8380169f83e28d30f165e0cbb7c9729c8427f784074027733f185bc6eb08d566\" returns successfully"
Sep 13 10:15:09.216468 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-8380169f83e28d30f165e0cbb7c9729c8427f784074027733f185bc6eb08d566-rootfs.mount: Deactivated successfully.
Sep 13 10:15:09.843103 kubelet[2783]: E0913 10:15:09.843058 2783 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 13 10:15:09.947053 containerd[1560]: time="2025-09-13T10:15:09.946997011Z" level=info msg="CreateContainer within sandbox \"1b1f2d9a7b7617439e1e985e715a9b0da831b9224e7fd28b8985b2833c34350b\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Sep 13 10:15:10.006457 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2296168338.mount: Deactivated successfully.
Sep 13 10:15:10.329274 containerd[1560]: time="2025-09-13T10:15:10.329205373Z" level=info msg="Container 886fc4a4412e73d10b3be9117463c15226d54c50c648389309c503f1e4dd30fc: CDI devices from CRI Config.CDIDevices: []"
Sep 13 10:15:10.333919 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4139974764.mount: Deactivated successfully.
Sep 13 10:15:10.962998 containerd[1560]: time="2025-09-13T10:15:10.962936955Z" level=info msg="CreateContainer within sandbox \"1b1f2d9a7b7617439e1e985e715a9b0da831b9224e7fd28b8985b2833c34350b\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"886fc4a4412e73d10b3be9117463c15226d54c50c648389309c503f1e4dd30fc\""
Sep 13 10:15:10.964351 containerd[1560]: time="2025-09-13T10:15:10.963392761Z" level=info msg="StartContainer for \"886fc4a4412e73d10b3be9117463c15226d54c50c648389309c503f1e4dd30fc\""
Sep 13 10:15:10.964351 containerd[1560]: time="2025-09-13T10:15:10.964261514Z" level=info msg="connecting to shim 886fc4a4412e73d10b3be9117463c15226d54c50c648389309c503f1e4dd30fc" address="unix:///run/containerd/s/bdd36644d0bd8616c0ddfd7cc0730efba8e035725af83264c289bb23902e2a9e" protocol=ttrpc version=3
Sep 13 10:15:10.993662 systemd[1]: Started cri-containerd-886fc4a4412e73d10b3be9117463c15226d54c50c648389309c503f1e4dd30fc.scope - libcontainer container 886fc4a4412e73d10b3be9117463c15226d54c50c648389309c503f1e4dd30fc.
Sep 13 10:15:11.024518 systemd[1]: cri-containerd-886fc4a4412e73d10b3be9117463c15226d54c50c648389309c503f1e4dd30fc.scope: Deactivated successfully.
Sep 13 10:15:11.025928 containerd[1560]: time="2025-09-13T10:15:11.025875303Z" level=info msg="TaskExit event in podsandbox handler container_id:\"886fc4a4412e73d10b3be9117463c15226d54c50c648389309c503f1e4dd30fc\" id:\"886fc4a4412e73d10b3be9117463c15226d54c50c648389309c503f1e4dd30fc\" pid:3357 exited_at:{seconds:1757758511 nanos:25003816}"
Sep 13 10:15:11.577961 containerd[1560]: time="2025-09-13T10:15:11.577882400Z" level=info msg="received exit event container_id:\"886fc4a4412e73d10b3be9117463c15226d54c50c648389309c503f1e4dd30fc\" id:\"886fc4a4412e73d10b3be9117463c15226d54c50c648389309c503f1e4dd30fc\" pid:3357 exited_at:{seconds:1757758511 nanos:25003816}"
Sep 13 10:15:11.579925 containerd[1560]: time="2025-09-13T10:15:11.579739118Z" level=info msg="StartContainer for \"886fc4a4412e73d10b3be9117463c15226d54c50c648389309c503f1e4dd30fc\" returns successfully"
Sep 13 10:15:11.600953 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-886fc4a4412e73d10b3be9117463c15226d54c50c648389309c503f1e4dd30fc-rootfs.mount: Deactivated successfully.
Sep 13 10:15:11.850691 kubelet[2783]: E0913 10:15:11.850547 2783 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 13 10:15:13.126683 kubelet[2783]: E0913 10:15:13.126522 2783 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 13 10:15:13.650297 containerd[1560]: time="2025-09-13T10:15:13.650246646Z" level=info msg="CreateContainer within sandbox \"1b1f2d9a7b7617439e1e985e715a9b0da831b9224e7fd28b8985b2833c34350b\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Sep 13 10:15:14.602240 containerd[1560]: time="2025-09-13T10:15:14.602166014Z" level=info msg="Container dab87a2990198e44169ac0dd8f49a465b6dfb357244dda0bd27c199a4ece7b2b: CDI devices from CRI Config.CDIDevices: []"
Sep 13 10:15:14.793113 systemd[1]: Started sshd@9-10.0.0.19:22-10.0.0.1:57794.service - OpenSSH per-connection server daemon (10.0.0.1:57794).
Sep 13 10:15:14.858662 containerd[1560]: time="2025-09-13T10:15:14.858080825Z" level=info msg="CreateContainer within sandbox \"1b1f2d9a7b7617439e1e985e715a9b0da831b9224e7fd28b8985b2833c34350b\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"dab87a2990198e44169ac0dd8f49a465b6dfb357244dda0bd27c199a4ece7b2b\""
Sep 13 10:15:14.860226 containerd[1560]: time="2025-09-13T10:15:14.860193713Z" level=info msg="StartContainer for \"dab87a2990198e44169ac0dd8f49a465b6dfb357244dda0bd27c199a4ece7b2b\""
Sep 13 10:15:14.862366 containerd[1560]: time="2025-09-13T10:15:14.862324425Z" level=info msg="connecting to shim dab87a2990198e44169ac0dd8f49a465b6dfb357244dda0bd27c199a4ece7b2b" address="unix:///run/containerd/s/bdd36644d0bd8616c0ddfd7cc0730efba8e035725af83264c289bb23902e2a9e" protocol=ttrpc version=3
Sep 13 10:15:14.896825 systemd[1]: Started cri-containerd-dab87a2990198e44169ac0dd8f49a465b6dfb357244dda0bd27c199a4ece7b2b.scope - libcontainer container dab87a2990198e44169ac0dd8f49a465b6dfb357244dda0bd27c199a4ece7b2b.
Sep 13 10:15:14.935319 sshd[3386]: Accepted publickey for core from 10.0.0.1 port 57794 ssh2: RSA SHA256:zcsqT46NGGfuXQOUKdVqBiqQMVWjN6YtLkqFhpEQQJ4
Sep 13 10:15:14.937736 sshd-session[3386]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 13 10:15:14.946104 systemd-logind[1542]: New session 10 of user core.
Sep 13 10:15:14.951722 systemd[1]: Started session-10.scope - Session 10 of User core.
Sep 13 10:15:15.486621 containerd[1560]: time="2025-09-13T10:15:15.486576768Z" level=info msg="StartContainer for \"dab87a2990198e44169ac0dd8f49a465b6dfb357244dda0bd27c199a4ece7b2b\" returns successfully"
Sep 13 10:15:15.593733 containerd[1560]: time="2025-09-13T10:15:15.593688650Z" level=info msg="TaskExit event in podsandbox handler container_id:\"dab87a2990198e44169ac0dd8f49a465b6dfb357244dda0bd27c199a4ece7b2b\" id:\"f003b6196ac40cc092d4cdc0ae2dcd4e5ab3d4ff867b6be9ca0d9262ddb0b17c\" pid:3457 exited_at:{seconds:1757758515 nanos:593364581}"
Sep 13 10:15:15.611400 sshd[3419]: Connection closed by 10.0.0.1 port 57794
Sep 13 10:15:15.611726 sshd-session[3386]: pam_unix(sshd:session): session closed for user core
Sep 13 10:15:15.615744 systemd[1]: sshd@9-10.0.0.19:22-10.0.0.1:57794.service: Deactivated successfully.
Sep 13 10:15:15.618018 systemd[1]: session-10.scope: Deactivated successfully.
Sep 13 10:15:15.619002 systemd-logind[1542]: Session 10 logged out. Waiting for processes to exit.
Sep 13 10:15:15.620613 systemd-logind[1542]: Removed session 10.
Sep 13 10:15:15.690708 kubelet[2783]: I0913 10:15:15.687524 2783 kubelet_node_status.go:501] "Fast updating node status as it just became ready"
Sep 13 10:15:16.379867 systemd[1]: Created slice kubepods-burstable-podca9ef4ef_260c_4f06_8856_56f03a32e0ea.slice - libcontainer container kubepods-burstable-podca9ef4ef_260c_4f06_8856_56f03a32e0ea.slice.
Sep 13 10:15:16.441888 kubelet[2783]: I0913 10:15:16.441817 2783 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ca9ef4ef-260c-4f06-8856-56f03a32e0ea-config-volume\") pod \"coredns-674b8bbfcf-htfg4\" (UID: \"ca9ef4ef-260c-4f06-8856-56f03a32e0ea\") " pod="kube-system/coredns-674b8bbfcf-htfg4"
Sep 13 10:15:16.441888 kubelet[2783]: I0913 10:15:16.441874 2783 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tv8rm\" (UniqueName: \"kubernetes.io/projected/ca9ef4ef-260c-4f06-8856-56f03a32e0ea-kube-api-access-tv8rm\") pod \"coredns-674b8bbfcf-htfg4\" (UID: \"ca9ef4ef-260c-4f06-8856-56f03a32e0ea\") " pod="kube-system/coredns-674b8bbfcf-htfg4"
Sep 13 10:15:16.457355 systemd[1]: Created slice kubepods-burstable-podff1ea3c4_0ba7_48e0_a9d2_0d16c23d749e.slice - libcontainer container kubepods-burstable-podff1ea3c4_0ba7_48e0_a9d2_0d16c23d749e.slice.
Sep 13 10:15:16.503336 kubelet[2783]: E0913 10:15:16.503275 2783 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 13 10:15:16.542410 kubelet[2783]: I0913 10:15:16.542355 2783 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xvnjd\" (UniqueName: \"kubernetes.io/projected/ff1ea3c4-0ba7-48e0-a9d2-0d16c23d749e-kube-api-access-xvnjd\") pod \"coredns-674b8bbfcf-wjpvh\" (UID: \"ff1ea3c4-0ba7-48e0-a9d2-0d16c23d749e\") " pod="kube-system/coredns-674b8bbfcf-wjpvh"
Sep 13 10:15:16.542651 kubelet[2783]: I0913 10:15:16.542461 2783 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ff1ea3c4-0ba7-48e0-a9d2-0d16c23d749e-config-volume\") pod \"coredns-674b8bbfcf-wjpvh\" (UID: \"ff1ea3c4-0ba7-48e0-a9d2-0d16c23d749e\") " pod="kube-system/coredns-674b8bbfcf-wjpvh"
Sep 13 10:15:16.607606 kubelet[2783]: I0913 10:15:16.607526 2783 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-r8stm" podStartSLOduration=10.567450005 podStartE2EDuration="28.607497838s" podCreationTimestamp="2025-09-13 10:14:48 +0000 UTC" firstStartedPulling="2025-09-13 10:14:49.024921666 +0000 UTC m=+6.365171399" lastFinishedPulling="2025-09-13 10:15:07.06496951 +0000 UTC m=+24.405219232" observedRunningTime="2025-09-13 10:15:16.607192864 +0000 UTC m=+33.947442587" watchObservedRunningTime="2025-09-13 10:15:16.607497838 +0000 UTC m=+33.947747560"
Sep 13 10:15:16.621578 containerd[1560]: time="2025-09-13T10:15:16.621527160Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 13 10:15:16.645579 containerd[1560]: time="2025-09-13T10:15:16.645393686Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=18904197"
Sep 13 10:15:16.650325 containerd[1560]: time="2025-09-13T10:15:16.650266566Z" level=info msg="ImageCreate event name:\"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 13 10:15:16.651826 containerd[1560]: time="2025-09-13T10:15:16.651750813Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"18897442\" in 9.586516185s"
Sep 13 10:15:16.651826 containerd[1560]: time="2025-09-13T10:15:16.651816496Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\""
Sep 13 10:15:16.661299 containerd[1560]: time="2025-09-13T10:15:16.661241624Z" level=info msg="CreateContainer within sandbox \"31a34b9259364c5a7ea43e3ffc2f3a175cb48582dc7dec2fdd39e8c7aaf85fb2\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}"
Sep 13 10:15:16.671706 containerd[1560]: time="2025-09-13T10:15:16.671027720Z" level=info msg="Container 6f01272e61adb4b448b106fbb81687a1840d52c98362790054f080ce693287cd: CDI devices from CRI Config.CDIDevices: []"
Sep 13 10:15:16.676055 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4055639900.mount: Deactivated successfully.
Sep 13 10:15:16.680635 containerd[1560]: time="2025-09-13T10:15:16.680595956Z" level=info msg="CreateContainer within sandbox \"31a34b9259364c5a7ea43e3ffc2f3a175cb48582dc7dec2fdd39e8c7aaf85fb2\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"6f01272e61adb4b448b106fbb81687a1840d52c98362790054f080ce693287cd\""
Sep 13 10:15:16.681193 containerd[1560]: time="2025-09-13T10:15:16.681157751Z" level=info msg="StartContainer for \"6f01272e61adb4b448b106fbb81687a1840d52c98362790054f080ce693287cd\""
Sep 13 10:15:16.682023 containerd[1560]: time="2025-09-13T10:15:16.681988732Z" level=info msg="connecting to shim 6f01272e61adb4b448b106fbb81687a1840d52c98362790054f080ce693287cd" address="unix:///run/containerd/s/8124f637505d790b557f7fbf343a305d0d699d0d4be6b8816e18bd84b25e60ee" protocol=ttrpc version=3
Sep 13 10:15:16.684741 kubelet[2783]: E0913 10:15:16.684709 2783 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 13 10:15:16.690174 containerd[1560]: time="2025-09-13T10:15:16.690127303Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-htfg4,Uid:ca9ef4ef-260c-4f06-8856-56f03a32e0ea,Namespace:kube-system,Attempt:0,}"
Sep 13 10:15:16.708661 systemd[1]: Started cri-containerd-6f01272e61adb4b448b106fbb81687a1840d52c98362790054f080ce693287cd.scope - libcontainer container 6f01272e61adb4b448b106fbb81687a1840d52c98362790054f080ce693287cd.
Sep 13 10:15:16.741141 containerd[1560]: time="2025-09-13T10:15:16.741098754Z" level=info msg="StartContainer for \"6f01272e61adb4b448b106fbb81687a1840d52c98362790054f080ce693287cd\" returns successfully"
Sep 13 10:15:16.760527 kubelet[2783]: E0913 10:15:16.760370 2783 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 13 10:15:16.761918 containerd[1560]: time="2025-09-13T10:15:16.761869476Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-wjpvh,Uid:ff1ea3c4-0ba7-48e0-a9d2-0d16c23d749e,Namespace:kube-system,Attempt:0,}"
Sep 13 10:15:17.507019 kubelet[2783]: E0913 10:15:17.506971 2783 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 13 10:15:17.507183 kubelet[2783]: E0913 10:15:17.507068 2783 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 13 10:15:18.508497 kubelet[2783]: E0913 10:15:18.508440 2783 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 13 10:15:18.986655 systemd-networkd[1476]: cilium_host: Link UP
Sep 13 10:15:18.986977 systemd-networkd[1476]: cilium_net: Link UP
Sep 13 10:15:18.987307 systemd-networkd[1476]: cilium_net: Gained carrier
Sep 13 10:15:18.987805 systemd-networkd[1476]: cilium_host: Gained carrier
Sep 13 10:15:19.106951 systemd-networkd[1476]: cilium_vxlan: Link UP
Sep 13 10:15:19.106965 systemd-networkd[1476]: cilium_vxlan: Gained carrier
Sep 13 10:15:19.289741 systemd-networkd[1476]: cilium_host: Gained IPv6LL
Sep 13 10:15:19.357637 kernel: NET: Registered PF_ALG protocol family
Sep 13 10:15:19.953649 systemd-networkd[1476]: cilium_net: Gained IPv6LL
Sep 13 10:15:20.104646 systemd-networkd[1476]: lxc_health: Link UP
Sep 13 10:15:20.115798 systemd-networkd[1476]: lxc_health: Gained carrier
Sep 13 10:15:20.231005 kernel: eth0: renamed from tmp38e0c
Sep 13 10:15:20.232176 systemd-networkd[1476]: lxcd9b3d37c0561: Link UP
Sep 13 10:15:20.233357 systemd-networkd[1476]: lxcd9b3d37c0561: Gained carrier
Sep 13 10:15:20.321775 kernel: eth0: renamed from tmpb60f1
Sep 13 10:15:20.321061 systemd-networkd[1476]: lxc81476d5811c8: Link UP
Sep 13 10:15:20.321382 systemd-networkd[1476]: lxc81476d5811c8: Gained carrier
Sep 13 10:15:20.627758 systemd[1]: Started sshd@10-10.0.0.19:22-10.0.0.1:46512.service - OpenSSH per-connection server daemon (10.0.0.1:46512).
Sep 13 10:15:20.682751 sshd[3945]: Accepted publickey for core from 10.0.0.1 port 46512 ssh2: RSA SHA256:zcsqT46NGGfuXQOUKdVqBiqQMVWjN6YtLkqFhpEQQJ4
Sep 13 10:15:20.685556 sshd-session[3945]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 13 10:15:20.692321 systemd-logind[1542]: New session 11 of user core.
Sep 13 10:15:20.698677 systemd[1]: Started session-11.scope - Session 11 of User core.
Sep 13 10:15:20.786708 systemd-networkd[1476]: cilium_vxlan: Gained IPv6LL
Sep 13 10:15:20.887607 kubelet[2783]: E0913 10:15:20.886541 2783 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 13 10:15:20.894978 sshd[3951]: Connection closed by 10.0.0.1 port 46512
Sep 13 10:15:20.895494 sshd-session[3945]: pam_unix(sshd:session): session closed for user core
Sep 13 10:15:20.901541 systemd-logind[1542]: Session 11 logged out. Waiting for processes to exit.
Sep 13 10:15:20.902382 systemd[1]: sshd@10-10.0.0.19:22-10.0.0.1:46512.service: Deactivated successfully.
Sep 13 10:15:20.905224 systemd[1]: session-11.scope: Deactivated successfully.
Sep 13 10:15:20.909894 systemd-logind[1542]: Removed session 11.
Sep 13 10:15:20.979170 kubelet[2783]: I0913 10:15:20.979082 2783 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-6c4d7847fc-lzl4l" podStartSLOduration=5.498351546 podStartE2EDuration="32.979062173s" podCreationTimestamp="2025-09-13 10:14:48 +0000 UTC" firstStartedPulling="2025-09-13 10:14:49.171907885 +0000 UTC m=+6.512157607" lastFinishedPulling="2025-09-13 10:15:16.652618512 +0000 UTC m=+33.992868234" observedRunningTime="2025-09-13 10:15:17.547859908 +0000 UTC m=+34.888109630" watchObservedRunningTime="2025-09-13 10:15:20.979062173 +0000 UTC m=+38.319311895"
Sep 13 10:15:21.538736 kubelet[2783]: E0913 10:15:21.538699 2783 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 13 10:15:21.745793 systemd-networkd[1476]: lxc_health: Gained IPv6LL
Sep 13 10:15:21.746344 systemd-networkd[1476]: lxc81476d5811c8: Gained IPv6LL
Sep 13 10:15:22.257737 systemd-networkd[1476]: lxcd9b3d37c0561: Gained IPv6LL
Sep 13 10:15:23.971265 containerd[1560]: time="2025-09-13T10:15:23.970397876Z" level=info msg="connecting to shim b60f1af7a7727d86cc1e47d23e3a8ebab4f972e1af6ef5da8e7eb39a8f1b822f" address="unix:///run/containerd/s/da3ea2658a18af9a17753e3507f9c48fba1d7d57db05611a463a623fc68be72a" namespace=k8s.io protocol=ttrpc version=3
Sep 13 10:15:23.971890 containerd[1560]: time="2025-09-13T10:15:23.971861343Z" level=info msg="connecting to shim 38e0c5ae0ee0d7bb180844fe6407ee6facad88658b8c713f3b86d70b24d82e1b" address="unix:///run/containerd/s/e728683b032a6addd65d6a05c5a4c0e6b23fb3027039d3274e33fc74d17fc85f" namespace=k8s.io protocol=ttrpc version=3
Sep 13 10:15:24.000675 systemd[1]: Started cri-containerd-38e0c5ae0ee0d7bb180844fe6407ee6facad88658b8c713f3b86d70b24d82e1b.scope - libcontainer container 38e0c5ae0ee0d7bb180844fe6407ee6facad88658b8c713f3b86d70b24d82e1b.
Sep 13 10:15:24.004414 systemd[1]: Started cri-containerd-b60f1af7a7727d86cc1e47d23e3a8ebab4f972e1af6ef5da8e7eb39a8f1b822f.scope - libcontainer container b60f1af7a7727d86cc1e47d23e3a8ebab4f972e1af6ef5da8e7eb39a8f1b822f.
Sep 13 10:15:24.016912 systemd-resolved[1408]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
Sep 13 10:15:24.018385 systemd-resolved[1408]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
Sep 13 10:15:24.050616 containerd[1560]: time="2025-09-13T10:15:24.050545530Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-wjpvh,Uid:ff1ea3c4-0ba7-48e0-a9d2-0d16c23d749e,Namespace:kube-system,Attempt:0,} returns sandbox id \"b60f1af7a7727d86cc1e47d23e3a8ebab4f972e1af6ef5da8e7eb39a8f1b822f\""
Sep 13 10:15:24.051497 kubelet[2783]: E0913 10:15:24.051454 2783 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 13 10:15:24.058847 containerd[1560]: time="2025-09-13T10:15:24.058691680Z" level=info msg="CreateContainer within sandbox \"b60f1af7a7727d86cc1e47d23e3a8ebab4f972e1af6ef5da8e7eb39a8f1b822f\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Sep 13 10:15:24.072689 containerd[1560]: time="2025-09-13T10:15:24.072650849Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-htfg4,Uid:ca9ef4ef-260c-4f06-8856-56f03a32e0ea,Namespace:kube-system,Attempt:0,} returns sandbox id \"38e0c5ae0ee0d7bb180844fe6407ee6facad88658b8c713f3b86d70b24d82e1b\""
Sep 13 10:15:24.073373 kubelet[2783]: E0913 10:15:24.073305 2783 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 13 10:15:24.079662 containerd[1560]: time="2025-09-13T10:15:24.079627204Z" level=info msg="Container 6dc21608d494c2102236e4ce3d0cc2fdb05369c1f63fa307ec7a6cab82b48122: CDI devices from CRI Config.CDIDevices: []"
Sep 13 10:15:24.082568 containerd[1560]: time="2025-09-13T10:15:24.081541246Z" level=info msg="CreateContainer within sandbox \"38e0c5ae0ee0d7bb180844fe6407ee6facad88658b8c713f3b86d70b24d82e1b\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Sep 13 10:15:24.102339 containerd[1560]: time="2025-09-13T10:15:24.102298756Z" level=info msg="CreateContainer within sandbox \"b60f1af7a7727d86cc1e47d23e3a8ebab4f972e1af6ef5da8e7eb39a8f1b822f\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"6dc21608d494c2102236e4ce3d0cc2fdb05369c1f63fa307ec7a6cab82b48122\""
Sep 13 10:15:24.103668 containerd[1560]: time="2025-09-13T10:15:24.103640776Z" level=info msg="StartContainer for \"6dc21608d494c2102236e4ce3d0cc2fdb05369c1f63fa307ec7a6cab82b48122\""
Sep 13 10:15:24.103668 containerd[1560]: time="2025-09-13T10:15:24.103664941Z" level=info msg="Container 62a364afd7fabb2e3d480504c1fd4a1726895d0af596a170f80c049e2268a438: CDI devices from CRI Config.CDIDevices: []"
Sep 13 10:15:24.105163 containerd[1560]: time="2025-09-13T10:15:24.105114061Z" level=info msg="connecting to shim 6dc21608d494c2102236e4ce3d0cc2fdb05369c1f63fa307ec7a6cab82b48122" address="unix:///run/containerd/s/da3ea2658a18af9a17753e3507f9c48fba1d7d57db05611a463a623fc68be72a" protocol=ttrpc version=3
Sep 13 10:15:24.111012 containerd[1560]: time="2025-09-13T10:15:24.110961186Z" level=info msg="CreateContainer within sandbox \"38e0c5ae0ee0d7bb180844fe6407ee6facad88658b8c713f3b86d70b24d82e1b\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"62a364afd7fabb2e3d480504c1fd4a1726895d0af596a170f80c049e2268a438\""
Sep 13 10:15:24.117526 containerd[1560]: time="2025-09-13T10:15:24.116552199Z" level=info msg="StartContainer for \"62a364afd7fabb2e3d480504c1fd4a1726895d0af596a170f80c049e2268a438\""
Sep 13 10:15:24.120809 containerd[1560]: time="2025-09-13T10:15:24.120757502Z" level=info msg="connecting to shim 62a364afd7fabb2e3d480504c1fd4a1726895d0af596a170f80c049e2268a438" address="unix:///run/containerd/s/e728683b032a6addd65d6a05c5a4c0e6b23fb3027039d3274e33fc74d17fc85f" protocol=ttrpc version=3
Sep 13 10:15:24.132788 systemd[1]: Started cri-containerd-6dc21608d494c2102236e4ce3d0cc2fdb05369c1f63fa307ec7a6cab82b48122.scope - libcontainer container 6dc21608d494c2102236e4ce3d0cc2fdb05369c1f63fa307ec7a6cab82b48122.
Sep 13 10:15:24.148664 systemd[1]: Started cri-containerd-62a364afd7fabb2e3d480504c1fd4a1726895d0af596a170f80c049e2268a438.scope - libcontainer container 62a364afd7fabb2e3d480504c1fd4a1726895d0af596a170f80c049e2268a438.
Sep 13 10:15:24.205097 containerd[1560]: time="2025-09-13T10:15:24.205052807Z" level=info msg="StartContainer for \"6dc21608d494c2102236e4ce3d0cc2fdb05369c1f63fa307ec7a6cab82b48122\" returns successfully"
Sep 13 10:15:24.205267 containerd[1560]: time="2025-09-13T10:15:24.205157363Z" level=info msg="StartContainer for \"62a364afd7fabb2e3d480504c1fd4a1726895d0af596a170f80c049e2268a438\" returns successfully"
Sep 13 10:15:24.546967 kubelet[2783]: E0913 10:15:24.546870 2783 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 13 10:15:24.548479 kubelet[2783]: E0913 10:15:24.548452 2783 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 13 10:15:24.873323 kubelet[2783]: I0913 10:15:24.872798 2783 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-wjpvh" podStartSLOduration=36.872779246 podStartE2EDuration="36.872779246s" podCreationTimestamp="2025-09-13 10:14:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-13 10:15:24.860304632 +0000 UTC m=+42.200554354" watchObservedRunningTime="2025-09-13 10:15:24.872779246 +0000 UTC m=+42.213028968"
Sep 13 10:15:24.960007 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1875567267.mount: Deactivated successfully.
Sep 13 10:15:25.561145 kubelet[2783]: E0913 10:15:25.561090 2783 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 13 10:15:25.561145 kubelet[2783]: E0913 10:15:25.561139 2783 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 13 10:15:25.572125 kubelet[2783]: I0913 10:15:25.571989 2783 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-htfg4" podStartSLOduration=37.571970989 podStartE2EDuration="37.571970989s" podCreationTimestamp="2025-09-13 10:14:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-13 10:15:24.87340469 +0000 UTC m=+42.213654412" watchObservedRunningTime="2025-09-13 10:15:25.571970989 +0000 UTC m=+42.912220711"
Sep 13 10:15:25.909689 systemd[1]: Started sshd@11-10.0.0.19:22-10.0.0.1:46520.service - OpenSSH per-connection server daemon (10.0.0.1:46520).
Sep 13 10:15:25.981602 sshd[4153]: Accepted publickey for core from 10.0.0.1 port 46520 ssh2: RSA SHA256:zcsqT46NGGfuXQOUKdVqBiqQMVWjN6YtLkqFhpEQQJ4
Sep 13 10:15:25.983256 sshd-session[4153]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 13 10:15:25.988112 systemd-logind[1542]: New session 12 of user core.
Sep 13 10:15:26.003786 systemd[1]: Started session-12.scope - Session 12 of User core.
Sep 13 10:15:26.121792 sshd[4156]: Connection closed by 10.0.0.1 port 46520
Sep 13 10:15:26.122133 sshd-session[4153]: pam_unix(sshd:session): session closed for user core
Sep 13 10:15:26.126110 systemd[1]: sshd@11-10.0.0.19:22-10.0.0.1:46520.service: Deactivated successfully.
Sep 13 10:15:26.128283 systemd[1]: session-12.scope: Deactivated successfully.
Sep 13 10:15:26.129859 systemd-logind[1542]: Session 12 logged out. Waiting for processes to exit.
Sep 13 10:15:26.130895 systemd-logind[1542]: Removed session 12.
Sep 13 10:15:26.562970 kubelet[2783]: E0913 10:15:26.562930 2783 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 13 10:15:26.563403 kubelet[2783]: E0913 10:15:26.563074 2783 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 13 10:15:31.139291 systemd[1]: Started sshd@12-10.0.0.19:22-10.0.0.1:38920.service - OpenSSH per-connection server daemon (10.0.0.1:38920).
Sep 13 10:15:31.192575 sshd[4173]: Accepted publickey for core from 10.0.0.1 port 38920 ssh2: RSA SHA256:zcsqT46NGGfuXQOUKdVqBiqQMVWjN6YtLkqFhpEQQJ4
Sep 13 10:15:31.194034 sshd-session[4173]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 13 10:15:31.198985 systemd-logind[1542]: New session 13 of user core.
Sep 13 10:15:31.219807 systemd[1]: Started session-13.scope - Session 13 of User core.
Sep 13 10:15:31.441713 sshd[4176]: Connection closed by 10.0.0.1 port 38920
Sep 13 10:15:31.441978 sshd-session[4173]: pam_unix(sshd:session): session closed for user core
Sep 13 10:15:31.446959 systemd[1]: sshd@12-10.0.0.19:22-10.0.0.1:38920.service: Deactivated successfully.
Sep 13 10:15:31.449051 systemd[1]: session-13.scope: Deactivated successfully.
Sep 13 10:15:31.450072 systemd-logind[1542]: Session 13 logged out. Waiting for processes to exit.
Sep 13 10:15:31.451923 systemd-logind[1542]: Removed session 13.
Sep 13 10:15:36.459278 systemd[1]: Started sshd@13-10.0.0.19:22-10.0.0.1:38930.service - OpenSSH per-connection server daemon (10.0.0.1:38930).
Sep 13 10:15:36.513112 sshd[4190]: Accepted publickey for core from 10.0.0.1 port 38930 ssh2: RSA SHA256:zcsqT46NGGfuXQOUKdVqBiqQMVWjN6YtLkqFhpEQQJ4
Sep 13 10:15:36.514616 sshd-session[4190]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 13 10:15:36.519280 systemd-logind[1542]: New session 14 of user core.
Sep 13 10:15:36.534669 systemd[1]: Started session-14.scope - Session 14 of User core.
Sep 13 10:15:36.640712 sshd[4193]: Connection closed by 10.0.0.1 port 38930
Sep 13 10:15:36.641087 sshd-session[4190]: pam_unix(sshd:session): session closed for user core
Sep 13 10:15:36.652948 systemd[1]: sshd@13-10.0.0.19:22-10.0.0.1:38930.service: Deactivated successfully.
Sep 13 10:15:36.655292 systemd[1]: session-14.scope: Deactivated successfully.
Sep 13 10:15:36.656196 systemd-logind[1542]: Session 14 logged out. Waiting for processes to exit.
Sep 13 10:15:36.659921 systemd[1]: Started sshd@14-10.0.0.19:22-10.0.0.1:38934.service - OpenSSH per-connection server daemon (10.0.0.1:38934).
Sep 13 10:15:36.660672 systemd-logind[1542]: Removed session 14.
Sep 13 10:15:36.714985 sshd[4207]: Accepted publickey for core from 10.0.0.1 port 38934 ssh2: RSA SHA256:zcsqT46NGGfuXQOUKdVqBiqQMVWjN6YtLkqFhpEQQJ4
Sep 13 10:15:36.716341 sshd-session[4207]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 13 10:15:36.721826 systemd-logind[1542]: New session 15 of user core.
Sep 13 10:15:36.731727 systemd[1]: Started session-15.scope - Session 15 of User core.
Sep 13 10:15:37.009302 sshd[4210]: Connection closed by 10.0.0.1 port 38934
Sep 13 10:15:37.009829 sshd-session[4207]: pam_unix(sshd:session): session closed for user core
Sep 13 10:15:37.019023 systemd[1]: sshd@14-10.0.0.19:22-10.0.0.1:38934.service: Deactivated successfully.
Sep 13 10:15:37.023619 systemd[1]: session-15.scope: Deactivated successfully.
Sep 13 10:15:37.025023 systemd-logind[1542]: Session 15 logged out. Waiting for processes to exit.
Sep 13 10:15:37.030493 systemd[1]: Started sshd@15-10.0.0.19:22-10.0.0.1:38946.service - OpenSSH per-connection server daemon (10.0.0.1:38946).
Sep 13 10:15:37.031335 systemd-logind[1542]: Removed session 15.
Sep 13 10:15:37.097297 sshd[4222]: Accepted publickey for core from 10.0.0.1 port 38946 ssh2: RSA SHA256:zcsqT46NGGfuXQOUKdVqBiqQMVWjN6YtLkqFhpEQQJ4
Sep 13 10:15:37.098965 sshd-session[4222]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 13 10:15:37.103776 systemd-logind[1542]: New session 16 of user core.
Sep 13 10:15:37.120683 systemd[1]: Started session-16.scope - Session 16 of User core.
Sep 13 10:15:37.230709 sshd[4225]: Connection closed by 10.0.0.1 port 38946
Sep 13 10:15:37.231089 sshd-session[4222]: pam_unix(sshd:session): session closed for user core
Sep 13 10:15:37.235692 systemd[1]: sshd@15-10.0.0.19:22-10.0.0.1:38946.service: Deactivated successfully.
Sep 13 10:15:37.237555 systemd[1]: session-16.scope: Deactivated successfully.
Sep 13 10:15:37.238335 systemd-logind[1542]: Session 16 logged out. Waiting for processes to exit.
Sep 13 10:15:37.239348 systemd-logind[1542]: Removed session 16.
Sep 13 10:15:42.256877 systemd[1]: Started sshd@16-10.0.0.19:22-10.0.0.1:32872.service - OpenSSH per-connection server daemon (10.0.0.1:32872).
Sep 13 10:15:42.317214 sshd[4238]: Accepted publickey for core from 10.0.0.1 port 32872 ssh2: RSA SHA256:zcsqT46NGGfuXQOUKdVqBiqQMVWjN6YtLkqFhpEQQJ4
Sep 13 10:15:42.319063 sshd-session[4238]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 13 10:15:42.324420 systemd-logind[1542]: New session 17 of user core.
Sep 13 10:15:42.333664 systemd[1]: Started session-17.scope - Session 17 of User core.
Sep 13 10:15:42.468931 sshd[4241]: Connection closed by 10.0.0.1 port 32872
Sep 13 10:15:42.469375 sshd-session[4238]: pam_unix(sshd:session): session closed for user core
Sep 13 10:15:42.474553 systemd[1]: sshd@16-10.0.0.19:22-10.0.0.1:32872.service: Deactivated successfully.
Sep 13 10:15:42.476914 systemd[1]: session-17.scope: Deactivated successfully.
Sep 13 10:15:42.477883 systemd-logind[1542]: Session 17 logged out. Waiting for processes to exit.
Sep 13 10:15:42.479420 systemd-logind[1542]: Removed session 17.
Sep 13 10:15:47.484269 systemd[1]: Started sshd@17-10.0.0.19:22-10.0.0.1:32886.service - OpenSSH per-connection server daemon (10.0.0.1:32886).
Sep 13 10:15:47.557065 sshd[4257]: Accepted publickey for core from 10.0.0.1 port 32886 ssh2: RSA SHA256:zcsqT46NGGfuXQOUKdVqBiqQMVWjN6YtLkqFhpEQQJ4
Sep 13 10:15:47.558908 sshd-session[4257]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 13 10:15:47.564075 systemd-logind[1542]: New session 18 of user core.
Sep 13 10:15:47.575688 systemd[1]: Started session-18.scope - Session 18 of User core.
Sep 13 10:15:47.700441 sshd[4260]: Connection closed by 10.0.0.1 port 32886
Sep 13 10:15:47.700783 sshd-session[4257]: pam_unix(sshd:session): session closed for user core
Sep 13 10:15:47.705200 systemd[1]: sshd@17-10.0.0.19:22-10.0.0.1:32886.service: Deactivated successfully.
Sep 13 10:15:47.707716 systemd[1]: session-18.scope: Deactivated successfully.
Sep 13 10:15:47.708656 systemd-logind[1542]: Session 18 logged out. Waiting for processes to exit.
Sep 13 10:15:47.710153 systemd-logind[1542]: Removed session 18.
Sep 13 10:15:52.718398 systemd[1]: Started sshd@18-10.0.0.19:22-10.0.0.1:36298.service - OpenSSH per-connection server daemon (10.0.0.1:36298).
Sep 13 10:15:52.782981 sshd[4275]: Accepted publickey for core from 10.0.0.1 port 36298 ssh2: RSA SHA256:zcsqT46NGGfuXQOUKdVqBiqQMVWjN6YtLkqFhpEQQJ4
Sep 13 10:15:52.785012 sshd-session[4275]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 13 10:15:52.789569 systemd-logind[1542]: New session 19 of user core.
Sep 13 10:15:52.807677 systemd[1]: Started session-19.scope - Session 19 of User core.
Sep 13 10:15:52.927793 sshd[4278]: Connection closed by 10.0.0.1 port 36298
Sep 13 10:15:52.929933 sshd-session[4275]: pam_unix(sshd:session): session closed for user core
Sep 13 10:15:52.944075 systemd[1]: sshd@18-10.0.0.19:22-10.0.0.1:36298.service: Deactivated successfully.
Sep 13 10:15:52.946441 systemd[1]: session-19.scope: Deactivated successfully.
Sep 13 10:15:52.947453 systemd-logind[1542]: Session 19 logged out. Waiting for processes to exit.
Sep 13 10:15:52.951469 systemd[1]: Started sshd@19-10.0.0.19:22-10.0.0.1:36300.service - OpenSSH per-connection server daemon (10.0.0.1:36300).
Sep 13 10:15:52.952206 systemd-logind[1542]: Removed session 19.
Sep 13 10:15:53.022290 sshd[4292]: Accepted publickey for core from 10.0.0.1 port 36300 ssh2: RSA SHA256:zcsqT46NGGfuXQOUKdVqBiqQMVWjN6YtLkqFhpEQQJ4
Sep 13 10:15:53.024182 sshd-session[4292]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 13 10:15:53.029247 systemd-logind[1542]: New session 20 of user core.
Sep 13 10:15:53.038726 systemd[1]: Started session-20.scope - Session 20 of User core.
Sep 13 10:15:53.395062 sshd[4295]: Connection closed by 10.0.0.1 port 36300
Sep 13 10:15:53.395431 sshd-session[4292]: pam_unix(sshd:session): session closed for user core
Sep 13 10:15:53.412852 systemd[1]: sshd@19-10.0.0.19:22-10.0.0.1:36300.service: Deactivated successfully.
Sep 13 10:15:53.415176 systemd[1]: session-20.scope: Deactivated successfully.
Sep 13 10:15:53.415962 systemd-logind[1542]: Session 20 logged out. Waiting for processes to exit.
Sep 13 10:15:53.418949 systemd[1]: Started sshd@20-10.0.0.19:22-10.0.0.1:36308.service - OpenSSH per-connection server daemon (10.0.0.1:36308).
Sep 13 10:15:53.419749 systemd-logind[1542]: Removed session 20.
Sep 13 10:15:53.471301 sshd[4306]: Accepted publickey for core from 10.0.0.1 port 36308 ssh2: RSA SHA256:zcsqT46NGGfuXQOUKdVqBiqQMVWjN6YtLkqFhpEQQJ4
Sep 13 10:15:53.473092 sshd-session[4306]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 13 10:15:53.478614 systemd-logind[1542]: New session 21 of user core.
Sep 13 10:15:53.493706 systemd[1]: Started session-21.scope - Session 21 of User core.
Sep 13 10:15:54.100602 sshd[4309]: Connection closed by 10.0.0.1 port 36308
Sep 13 10:15:54.098875 sshd-session[4306]: pam_unix(sshd:session): session closed for user core
Sep 13 10:15:54.117350 systemd[1]: sshd@20-10.0.0.19:22-10.0.0.1:36308.service: Deactivated successfully.
Sep 13 10:15:54.121428 systemd[1]: session-21.scope: Deactivated successfully.
Sep 13 10:15:54.125044 systemd-logind[1542]: Session 21 logged out. Waiting for processes to exit.
Sep 13 10:15:54.127389 systemd[1]: Started sshd@21-10.0.0.19:22-10.0.0.1:36324.service - OpenSSH per-connection server daemon (10.0.0.1:36324).
Sep 13 10:15:54.128828 systemd-logind[1542]: Removed session 21.
Sep 13 10:15:54.193535 sshd[4330]: Accepted publickey for core from 10.0.0.1 port 36324 ssh2: RSA SHA256:zcsqT46NGGfuXQOUKdVqBiqQMVWjN6YtLkqFhpEQQJ4
Sep 13 10:15:54.194829 sshd-session[4330]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 13 10:15:54.199353 systemd-logind[1542]: New session 22 of user core.
Sep 13 10:15:54.218652 systemd[1]: Started session-22.scope - Session 22 of User core.
Sep 13 10:15:54.521519 sshd[4333]: Connection closed by 10.0.0.1 port 36324
Sep 13 10:15:54.523269 sshd-session[4330]: pam_unix(sshd:session): session closed for user core
Sep 13 10:15:54.531361 systemd[1]: sshd@21-10.0.0.19:22-10.0.0.1:36324.service: Deactivated successfully.
Sep 13 10:15:54.533638 systemd[1]: session-22.scope: Deactivated successfully.
Sep 13 10:15:54.534626 systemd-logind[1542]: Session 22 logged out. Waiting for processes to exit.
Sep 13 10:15:54.537929 systemd[1]: Started sshd@22-10.0.0.19:22-10.0.0.1:36328.service - OpenSSH per-connection server daemon (10.0.0.1:36328).
Sep 13 10:15:54.538742 systemd-logind[1542]: Removed session 22.
Sep 13 10:15:54.600985 sshd[4344]: Accepted publickey for core from 10.0.0.1 port 36328 ssh2: RSA SHA256:zcsqT46NGGfuXQOUKdVqBiqQMVWjN6YtLkqFhpEQQJ4
Sep 13 10:15:54.602271 sshd-session[4344]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 13 10:15:54.606723 systemd-logind[1542]: New session 23 of user core.
Sep 13 10:15:54.616712 systemd[1]: Started session-23.scope - Session 23 of User core.
Sep 13 10:15:54.732759 sshd[4347]: Connection closed by 10.0.0.1 port 36328
Sep 13 10:15:54.733163 sshd-session[4344]: pam_unix(sshd:session): session closed for user core
Sep 13 10:15:54.738160 systemd[1]: sshd@22-10.0.0.19:22-10.0.0.1:36328.service: Deactivated successfully.
Sep 13 10:15:54.740676 systemd[1]: session-23.scope: Deactivated successfully.
Sep 13 10:15:54.741456 systemd-logind[1542]: Session 23 logged out. Waiting for processes to exit.
Sep 13 10:15:54.743201 systemd-logind[1542]: Removed session 23.
Sep 13 10:15:54.769096 kubelet[2783]: E0913 10:15:54.769032 2783 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 13 10:15:58.768664 kubelet[2783]: E0913 10:15:58.768617 2783 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 13 10:15:59.750754 systemd[1]: Started sshd@23-10.0.0.19:22-10.0.0.1:36340.service - OpenSSH per-connection server daemon (10.0.0.1:36340).
Sep 13 10:15:59.805857 sshd[4360]: Accepted publickey for core from 10.0.0.1 port 36340 ssh2: RSA SHA256:zcsqT46NGGfuXQOUKdVqBiqQMVWjN6YtLkqFhpEQQJ4
Sep 13 10:15:59.807427 sshd-session[4360]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 13 10:15:59.812346 systemd-logind[1542]: New session 24 of user core.
Sep 13 10:15:59.822636 systemd[1]: Started session-24.scope - Session 24 of User core.
Sep 13 10:15:59.928948 sshd[4363]: Connection closed by 10.0.0.1 port 36340
Sep 13 10:15:59.929302 sshd-session[4360]: pam_unix(sshd:session): session closed for user core
Sep 13 10:15:59.933675 systemd[1]: sshd@23-10.0.0.19:22-10.0.0.1:36340.service: Deactivated successfully.
Sep 13 10:15:59.935923 systemd[1]: session-24.scope: Deactivated successfully.
Sep 13 10:15:59.936872 systemd-logind[1542]: Session 24 logged out. Waiting for processes to exit.
Sep 13 10:15:59.938161 systemd-logind[1542]: Removed session 24.
Sep 13 10:16:02.768871 kubelet[2783]: E0913 10:16:02.768835 2783 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 13 10:16:04.946941 systemd[1]: Started sshd@24-10.0.0.19:22-10.0.0.1:50412.service - OpenSSH per-connection server daemon (10.0.0.1:50412).
Sep 13 10:16:05.008017 sshd[4379]: Accepted publickey for core from 10.0.0.1 port 50412 ssh2: RSA SHA256:zcsqT46NGGfuXQOUKdVqBiqQMVWjN6YtLkqFhpEQQJ4
Sep 13 10:16:05.009417 sshd-session[4379]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 13 10:16:05.013699 systemd-logind[1542]: New session 25 of user core.
Sep 13 10:16:05.030641 systemd[1]: Started session-25.scope - Session 25 of User core.
Sep 13 10:16:05.148668 sshd[4382]: Connection closed by 10.0.0.1 port 50412
Sep 13 10:16:05.149523 sshd-session[4379]: pam_unix(sshd:session): session closed for user core
Sep 13 10:16:05.154845 systemd-logind[1542]: Session 25 logged out. Waiting for processes to exit.
Sep 13 10:16:05.155021 systemd[1]: sshd@24-10.0.0.19:22-10.0.0.1:50412.service: Deactivated successfully.
Sep 13 10:16:05.157713 systemd[1]: session-25.scope: Deactivated successfully.
Sep 13 10:16:05.160647 systemd-logind[1542]: Removed session 25.
Sep 13 10:16:10.166127 systemd[1]: Started sshd@25-10.0.0.19:22-10.0.0.1:59280.service - OpenSSH per-connection server daemon (10.0.0.1:59280).
Sep 13 10:16:10.222522 sshd[4396]: Accepted publickey for core from 10.0.0.1 port 59280 ssh2: RSA SHA256:zcsqT46NGGfuXQOUKdVqBiqQMVWjN6YtLkqFhpEQQJ4
Sep 13 10:16:10.224244 sshd-session[4396]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 13 10:16:10.229195 systemd-logind[1542]: New session 26 of user core.
Sep 13 10:16:10.242770 systemd[1]: Started session-26.scope - Session 26 of User core.
Sep 13 10:16:10.358043 sshd[4399]: Connection closed by 10.0.0.1 port 59280
Sep 13 10:16:10.358408 sshd-session[4396]: pam_unix(sshd:session): session closed for user core
Sep 13 10:16:10.372269 systemd[1]: sshd@25-10.0.0.19:22-10.0.0.1:59280.service: Deactivated successfully.
Sep 13 10:16:10.374122 systemd[1]: session-26.scope: Deactivated successfully.
Sep 13 10:16:10.374907 systemd-logind[1542]: Session 26 logged out. Waiting for processes to exit.
Sep 13 10:16:10.377538 systemd[1]: Started sshd@26-10.0.0.19:22-10.0.0.1:59292.service - OpenSSH per-connection server daemon (10.0.0.1:59292).
Sep 13 10:16:10.378261 systemd-logind[1542]: Removed session 26.
Sep 13 10:16:10.443491 sshd[4412]: Accepted publickey for core from 10.0.0.1 port 59292 ssh2: RSA SHA256:zcsqT46NGGfuXQOUKdVqBiqQMVWjN6YtLkqFhpEQQJ4
Sep 13 10:16:10.445036 sshd-session[4412]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 13 10:16:10.449673 systemd-logind[1542]: New session 27 of user core.
Sep 13 10:16:10.464672 systemd[1]: Started session-27.scope - Session 27 of User core.
Sep 13 10:16:11.806017 containerd[1560]: time="2025-09-13T10:16:11.805961301Z" level=info msg="StopContainer for \"6f01272e61adb4b448b106fbb81687a1840d52c98362790054f080ce693287cd\" with timeout 30 (s)"
Sep 13 10:16:11.818924 containerd[1560]: time="2025-09-13T10:16:11.818881512Z" level=info msg="Stop container \"6f01272e61adb4b448b106fbb81687a1840d52c98362790054f080ce693287cd\" with signal terminated"
Sep 13 10:16:11.834409 systemd[1]: cri-containerd-6f01272e61adb4b448b106fbb81687a1840d52c98362790054f080ce693287cd.scope: Deactivated successfully.
Sep 13 10:16:11.837976 containerd[1560]: time="2025-09-13T10:16:11.837201450Z" level=info msg="received exit event container_id:\"6f01272e61adb4b448b106fbb81687a1840d52c98362790054f080ce693287cd\" id:\"6f01272e61adb4b448b106fbb81687a1840d52c98362790054f080ce693287cd\" pid:3543 exited_at:{seconds:1757758571 nanos:836857487}"
Sep 13 10:16:11.837976 containerd[1560]: time="2025-09-13T10:16:11.837251906Z" level=info msg="TaskExit event in podsandbox handler container_id:\"6f01272e61adb4b448b106fbb81687a1840d52c98362790054f080ce693287cd\" id:\"6f01272e61adb4b448b106fbb81687a1840d52c98362790054f080ce693287cd\" pid:3543 exited_at:{seconds:1757758571 nanos:836857487}"
Sep 13 10:16:11.841641 containerd[1560]: time="2025-09-13T10:16:11.841604056Z" level=info msg="TaskExit event in podsandbox handler container_id:\"dab87a2990198e44169ac0dd8f49a465b6dfb357244dda0bd27c199a4ece7b2b\" id:\"4a2f1ce1fdd0610c36878e57aedfa9d153acb8b86c741867fa43afe5f07dd7dd\" pid:4435 exited_at:{seconds:1757758571 nanos:841321459}"
Sep 13 10:16:11.843527 containerd[1560]: time="2025-09-13T10:16:11.843483111Z" level=info msg="StopContainer for \"dab87a2990198e44169ac0dd8f49a465b6dfb357244dda0bd27c199a4ece7b2b\" with timeout 2 (s)"
Sep 13 10:16:11.843840 containerd[1560]: time="2025-09-13T10:16:11.843821975Z" level=info msg="Stop container \"dab87a2990198e44169ac0dd8f49a465b6dfb357244dda0bd27c199a4ece7b2b\" with signal terminated"
Sep 13 10:16:11.845602 containerd[1560]: time="2025-09-13T10:16:11.844902716Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Sep 13 10:16:11.851875 systemd-networkd[1476]: lxc_health: Link DOWN
Sep 13 10:16:11.851886 systemd-networkd[1476]: lxc_health: Lost carrier
Sep 13 10:16:11.872776 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-6f01272e61adb4b448b106fbb81687a1840d52c98362790054f080ce693287cd-rootfs.mount: Deactivated successfully.
Sep 13 10:16:11.874266 systemd[1]: cri-containerd-dab87a2990198e44169ac0dd8f49a465b6dfb357244dda0bd27c199a4ece7b2b.scope: Deactivated successfully.
Sep 13 10:16:11.874931 containerd[1560]: time="2025-09-13T10:16:11.874879478Z" level=info msg="received exit event container_id:\"dab87a2990198e44169ac0dd8f49a465b6dfb357244dda0bd27c199a4ece7b2b\" id:\"dab87a2990198e44169ac0dd8f49a465b6dfb357244dda0bd27c199a4ece7b2b\" pid:3401 exited_at:{seconds:1757758571 nanos:874390419}"
Sep 13 10:16:11.875042 systemd[1]: cri-containerd-dab87a2990198e44169ac0dd8f49a465b6dfb357244dda0bd27c199a4ece7b2b.scope: Consumed 7.044s CPU time, 126.6M memory peak, 220K read from disk, 13.3M written to disk.
Sep 13 10:16:11.876175 containerd[1560]: time="2025-09-13T10:16:11.876140711Z" level=info msg="TaskExit event in podsandbox handler container_id:\"dab87a2990198e44169ac0dd8f49a465b6dfb357244dda0bd27c199a4ece7b2b\" id:\"dab87a2990198e44169ac0dd8f49a465b6dfb357244dda0bd27c199a4ece7b2b\" pid:3401 exited_at:{seconds:1757758571 nanos:874390419}"
Sep 13 10:16:11.899428 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-dab87a2990198e44169ac0dd8f49a465b6dfb357244dda0bd27c199a4ece7b2b-rootfs.mount: Deactivated successfully.
Sep 13 10:16:11.903422 containerd[1560]: time="2025-09-13T10:16:11.903197641Z" level=info msg="StopContainer for \"6f01272e61adb4b448b106fbb81687a1840d52c98362790054f080ce693287cd\" returns successfully"
Sep 13 10:16:11.906682 containerd[1560]: time="2025-09-13T10:16:11.906631027Z" level=info msg="StopPodSandbox for \"31a34b9259364c5a7ea43e3ffc2f3a175cb48582dc7dec2fdd39e8c7aaf85fb2\""
Sep 13 10:16:11.906855 containerd[1560]: time="2025-09-13T10:16:11.906700590Z" level=info msg="Container to stop \"6f01272e61adb4b448b106fbb81687a1840d52c98362790054f080ce693287cd\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Sep 13 10:16:11.914443 containerd[1560]: time="2025-09-13T10:16:11.914399792Z" level=info msg="StopContainer for \"dab87a2990198e44169ac0dd8f49a465b6dfb357244dda0bd27c199a4ece7b2b\" returns successfully"
Sep 13 10:16:11.914858 systemd[1]: cri-containerd-31a34b9259364c5a7ea43e3ffc2f3a175cb48582dc7dec2fdd39e8c7aaf85fb2.scope: Deactivated successfully.
Sep 13 10:16:11.915974 containerd[1560]: time="2025-09-13T10:16:11.915945084Z" level=info msg="StopPodSandbox for \"1b1f2d9a7b7617439e1e985e715a9b0da831b9224e7fd28b8985b2833c34350b\""
Sep 13 10:16:11.916031 containerd[1560]: time="2025-09-13T10:16:11.916008294Z" level=info msg="Container to stop \"dab87a2990198e44169ac0dd8f49a465b6dfb357244dda0bd27c199a4ece7b2b\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Sep 13 10:16:11.916031 containerd[1560]: time="2025-09-13T10:16:11.916020817Z" level=info msg="Container to stop \"8d857f75ade53fcf822494d6f9ead4677f0f96ffb55aec6b0e078419975290c5\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Sep 13 10:16:11.916092 containerd[1560]: time="2025-09-13T10:16:11.916030717Z" level=info msg="Container to stop \"68d06fd673da33eb1116ebe7831800e92969cfa958c0e2e9a3ca9bfdf6fbc738\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Sep 13 10:16:11.916092 containerd[1560]: time="2025-09-13T10:16:11.916039774Z" level=info msg="Container to stop \"8380169f83e28d30f165e0cbb7c9729c8427f784074027733f185bc6eb08d566\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Sep 13 10:16:11.916092 containerd[1560]: time="2025-09-13T10:16:11.916048851Z" level=info msg="Container to stop \"886fc4a4412e73d10b3be9117463c15226d54c50c648389309c503f1e4dd30fc\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Sep 13 10:16:11.920296 containerd[1560]: time="2025-09-13T10:16:11.920249814Z" level=info msg="TaskExit event in podsandbox handler container_id:\"31a34b9259364c5a7ea43e3ffc2f3a175cb48582dc7dec2fdd39e8c7aaf85fb2\" id:\"31a34b9259364c5a7ea43e3ffc2f3a175cb48582dc7dec2fdd39e8c7aaf85fb2\" pid:3016 exit_status:137 exited_at:{seconds:1757758571 nanos:919740016}"
Sep 13 10:16:11.923848 systemd[1]: cri-containerd-1b1f2d9a7b7617439e1e985e715a9b0da831b9224e7fd28b8985b2833c34350b.scope: Deactivated successfully.
Sep 13 10:16:11.948917 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-1b1f2d9a7b7617439e1e985e715a9b0da831b9224e7fd28b8985b2833c34350b-rootfs.mount: Deactivated successfully.
Sep 13 10:16:11.953161 containerd[1560]: time="2025-09-13T10:16:11.953121940Z" level=info msg="shim disconnected" id=1b1f2d9a7b7617439e1e985e715a9b0da831b9224e7fd28b8985b2833c34350b namespace=k8s.io
Sep 13 10:16:11.953161 containerd[1560]: time="2025-09-13T10:16:11.953154983Z" level=warning msg="cleaning up after shim disconnected" id=1b1f2d9a7b7617439e1e985e715a9b0da831b9224e7fd28b8985b2833c34350b namespace=k8s.io
Sep 13 10:16:11.955149 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-31a34b9259364c5a7ea43e3ffc2f3a175cb48582dc7dec2fdd39e8c7aaf85fb2-rootfs.mount: Deactivated successfully.
Sep 13 10:16:11.993894 containerd[1560]: time="2025-09-13T10:16:11.953162207Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Sep 13 10:16:11.994164 containerd[1560]: time="2025-09-13T10:16:11.962222892Z" level=info msg="shim disconnected" id=31a34b9259364c5a7ea43e3ffc2f3a175cb48582dc7dec2fdd39e8c7aaf85fb2 namespace=k8s.io
Sep 13 10:16:11.994164 containerd[1560]: time="2025-09-13T10:16:11.993947612Z" level=warning msg="cleaning up after shim disconnected" id=31a34b9259364c5a7ea43e3ffc2f3a175cb48582dc7dec2fdd39e8c7aaf85fb2 namespace=k8s.io
Sep 13 10:16:11.994164 containerd[1560]: time="2025-09-13T10:16:11.993957901Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Sep 13 10:16:12.023666 containerd[1560]: time="2025-09-13T10:16:12.023496199Z" level=info msg="TaskExit event in podsandbox handler container_id:\"1b1f2d9a7b7617439e1e985e715a9b0da831b9224e7fd28b8985b2833c34350b\" id:\"1b1f2d9a7b7617439e1e985e715a9b0da831b9224e7fd28b8985b2833c34350b\" pid:2943 exit_status:137 exited_at:{seconds:1757758571 nanos:925580241}"
Sep 13 10:16:12.025176 containerd[1560]: time="2025-09-13T10:16:12.024119973Z" level=info msg="TearDown network for sandbox \"1b1f2d9a7b7617439e1e985e715a9b0da831b9224e7fd28b8985b2833c34350b\" successfully"
Sep 13 10:16:12.025176 containerd[1560]: time="2025-09-13T10:16:12.024138848Z" level=info msg="StopPodSandbox for \"1b1f2d9a7b7617439e1e985e715a9b0da831b9224e7fd28b8985b2833c34350b\" returns successfully"
Sep 13 10:16:12.027687 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-31a34b9259364c5a7ea43e3ffc2f3a175cb48582dc7dec2fdd39e8c7aaf85fb2-shm.mount: Deactivated successfully.
Sep 13 10:16:12.027839 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-1b1f2d9a7b7617439e1e985e715a9b0da831b9224e7fd28b8985b2833c34350b-shm.mount: Deactivated successfully.
Sep 13 10:16:12.034249 containerd[1560]: time="2025-09-13T10:16:12.034201301Z" level=info msg="received exit event sandbox_id:\"31a34b9259364c5a7ea43e3ffc2f3a175cb48582dc7dec2fdd39e8c7aaf85fb2\" exit_status:137 exited_at:{seconds:1757758571 nanos:919740016}"
Sep 13 10:16:12.035099 containerd[1560]: time="2025-09-13T10:16:12.034449221Z" level=info msg="received exit event sandbox_id:\"1b1f2d9a7b7617439e1e985e715a9b0da831b9224e7fd28b8985b2833c34350b\" exit_status:137 exited_at:{seconds:1757758571 nanos:925580241}"
Sep 13 10:16:12.035099 containerd[1560]: time="2025-09-13T10:16:12.034901470Z" level=info msg="TearDown network for sandbox \"31a34b9259364c5a7ea43e3ffc2f3a175cb48582dc7dec2fdd39e8c7aaf85fb2\" successfully"
Sep 13 10:16:12.035099 containerd[1560]: time="2025-09-13T10:16:12.034930565Z" level=info msg="StopPodSandbox for \"31a34b9259364c5a7ea43e3ffc2f3a175cb48582dc7dec2fdd39e8c7aaf85fb2\" returns successfully"
Sep 13 10:16:12.136959 kubelet[2783]: I0913 10:16:12.136774 2783 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/b7bbb2c5-a755-4dd6-849d-87ed55f753a2-etc-cni-netd\") pod \"b7bbb2c5-a755-4dd6-849d-87ed55f753a2\" (UID: \"b7bbb2c5-a755-4dd6-849d-87ed55f753a2\") "
Sep 13 10:16:12.136959 kubelet[2783]: I0913 10:16:12.136870 2783 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/b7bbb2c5-a755-4dd6-849d-87ed55f753a2-cilium-run\") pod \"b7bbb2c5-a755-4dd6-849d-87ed55f753a2\" (UID: \"b7bbb2c5-a755-4dd6-849d-87ed55f753a2\") "
Sep 13 10:16:12.136959 kubelet[2783]: I0913 10:16:12.136891 2783 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/b7bbb2c5-a755-4dd6-849d-87ed55f753a2-bpf-maps\") pod \"b7bbb2c5-a755-4dd6-849d-87ed55f753a2\" (UID: \"b7bbb2c5-a755-4dd6-849d-87ed55f753a2\") "
Sep 13 10:16:12.136959 kubelet[2783]: I0913 10:16:12.136908 2783 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/b7bbb2c5-a755-4dd6-849d-87ed55f753a2-cilium-cgroup\") pod \"b7bbb2c5-a755-4dd6-849d-87ed55f753a2\" (UID: \"b7bbb2c5-a755-4dd6-849d-87ed55f753a2\") "
Sep 13 10:16:12.136959 kubelet[2783]: I0913 10:16:12.136885 2783 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b7bbb2c5-a755-4dd6-849d-87ed55f753a2-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "b7bbb2c5-a755-4dd6-849d-87ed55f753a2" (UID: "b7bbb2c5-a755-4dd6-849d-87ed55f753a2"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Sep 13 10:16:12.136959 kubelet[2783]: I0913 10:16:12.136908 2783 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b7bbb2c5-a755-4dd6-849d-87ed55f753a2-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "b7bbb2c5-a755-4dd6-849d-87ed55f753a2" (UID: "b7bbb2c5-a755-4dd6-849d-87ed55f753a2"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Sep 13 10:16:12.137687 kubelet[2783]: I0913 10:16:12.136928 2783 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b7bbb2c5-a755-4dd6-849d-87ed55f753a2-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "b7bbb2c5-a755-4dd6-849d-87ed55f753a2" (UID: "b7bbb2c5-a755-4dd6-849d-87ed55f753a2"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Sep 13 10:16:12.137687 kubelet[2783]: I0913 10:16:12.136935 2783 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/b7bbb2c5-a755-4dd6-849d-87ed55f753a2-clustermesh-secrets\") pod \"b7bbb2c5-a755-4dd6-849d-87ed55f753a2\" (UID: \"b7bbb2c5-a755-4dd6-849d-87ed55f753a2\") "
Sep 13 10:16:12.137687 kubelet[2783]: I0913 10:16:12.137000 2783 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/b7bbb2c5-a755-4dd6-849d-87ed55f753a2-hostproc\") pod \"b7bbb2c5-a755-4dd6-849d-87ed55f753a2\" (UID: \"b7bbb2c5-a755-4dd6-849d-87ed55f753a2\") "
Sep 13 10:16:12.137687 kubelet[2783]: I0913 10:16:12.137024 2783 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/b7bbb2c5-a755-4dd6-849d-87ed55f753a2-cni-path\") pod \"b7bbb2c5-a755-4dd6-849d-87ed55f753a2\" (UID: \"b7bbb2c5-a755-4dd6-849d-87ed55f753a2\") "
Sep 13 10:16:12.137687 kubelet[2783]: I0913 10:16:12.137083 2783 reconciler_common.go:299] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/b7bbb2c5-a755-4dd6-849d-87ed55f753a2-etc-cni-netd\") on node \"localhost\" DevicePath \"\""
Sep 13 10:16:12.137687 kubelet[2783]: I0913 10:16:12.137099 2783 reconciler_common.go:299] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/b7bbb2c5-a755-4dd6-849d-87ed55f753a2-cilium-run\") on node \"localhost\" DevicePath \"\""
Sep 13 10:16:12.137687 kubelet[2783]: I0913 10:16:12.137114 2783 reconciler_common.go:299] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/b7bbb2c5-a755-4dd6-849d-87ed55f753a2-bpf-maps\") on node \"localhost\" DevicePath \"\""
Sep 13 10:16:12.137930 kubelet[2783]: I0913 10:16:12.137139 2783 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b7bbb2c5-a755-4dd6-849d-87ed55f753a2-cni-path" (OuterVolumeSpecName: "cni-path") pod "b7bbb2c5-a755-4dd6-849d-87ed55f753a2" (UID: "b7bbb2c5-a755-4dd6-849d-87ed55f753a2"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Sep 13 10:16:12.137930 kubelet[2783]: I0913 10:16:12.137165 2783 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b7bbb2c5-a755-4dd6-849d-87ed55f753a2-hostproc" (OuterVolumeSpecName: "hostproc") pod "b7bbb2c5-a755-4dd6-849d-87ed55f753a2" (UID: "b7bbb2c5-a755-4dd6-849d-87ed55f753a2"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Sep 13 10:16:12.137930 kubelet[2783]: I0913 10:16:12.137193 2783 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b7bbb2c5-a755-4dd6-849d-87ed55f753a2-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "b7bbb2c5-a755-4dd6-849d-87ed55f753a2" (UID: "b7bbb2c5-a755-4dd6-849d-87ed55f753a2"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Sep 13 10:16:12.141623 kubelet[2783]: I0913 10:16:12.141581 2783 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b7bbb2c5-a755-4dd6-849d-87ed55f753a2-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "b7bbb2c5-a755-4dd6-849d-87ed55f753a2" (UID: "b7bbb2c5-a755-4dd6-849d-87ed55f753a2"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Sep 13 10:16:12.238349 kubelet[2783]: I0913 10:16:12.238178 2783 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/b7bbb2c5-a755-4dd6-849d-87ed55f753a2-xtables-lock\") pod \"b7bbb2c5-a755-4dd6-849d-87ed55f753a2\" (UID: \"b7bbb2c5-a755-4dd6-849d-87ed55f753a2\") "
Sep 13 10:16:12.238349 kubelet[2783]: I0913 10:16:12.238245 2783 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/b7bbb2c5-a755-4dd6-849d-87ed55f753a2-host-proc-sys-net\") pod \"b7bbb2c5-a755-4dd6-849d-87ed55f753a2\" (UID: \"b7bbb2c5-a755-4dd6-849d-87ed55f753a2\") "
Sep 13 10:16:12.238349 kubelet[2783]: I0913 10:16:12.238281 2783 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-djscd\" (UniqueName: \"kubernetes.io/projected/b7bbb2c5-a755-4dd6-849d-87ed55f753a2-kube-api-access-djscd\") pod \"b7bbb2c5-a755-4dd6-849d-87ed55f753a2\" (UID: \"b7bbb2c5-a755-4dd6-849d-87ed55f753a2\") "
Sep 13 10:16:12.238349 kubelet[2783]: I0913 10:16:12.238306 2783 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/61458706-555f-4e09-a660-0d5320dabd20-cilium-config-path\") pod \"61458706-555f-4e09-a660-0d5320dabd20\" (UID: \"61458706-555f-4e09-a660-0d5320dabd20\") "
Sep 13 10:16:12.238349 kubelet[2783]: I0913 10:16:12.238246 2783 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b7bbb2c5-a755-4dd6-849d-87ed55f753a2-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "b7bbb2c5-a755-4dd6-849d-87ed55f753a2" (UID: "b7bbb2c5-a755-4dd6-849d-87ed55f753a2"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Sep 13 10:16:12.238349 kubelet[2783]: I0913 10:16:12.238333 2783 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6z9pf\" (UniqueName: \"kubernetes.io/projected/61458706-555f-4e09-a660-0d5320dabd20-kube-api-access-6z9pf\") pod \"61458706-555f-4e09-a660-0d5320dabd20\" (UID: \"61458706-555f-4e09-a660-0d5320dabd20\") "
Sep 13 10:16:12.238713 kubelet[2783]: I0913 10:16:12.238357 2783 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/b7bbb2c5-a755-4dd6-849d-87ed55f753a2-cilium-config-path\") pod \"b7bbb2c5-a755-4dd6-849d-87ed55f753a2\" (UID: \"b7bbb2c5-a755-4dd6-849d-87ed55f753a2\") "
Sep 13 10:16:12.238713 kubelet[2783]: I0913 10:16:12.238365 2783 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b7bbb2c5-a755-4dd6-849d-87ed55f753a2-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "b7bbb2c5-a755-4dd6-849d-87ed55f753a2" (UID: "b7bbb2c5-a755-4dd6-849d-87ed55f753a2"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Sep 13 10:16:12.238713 kubelet[2783]: I0913 10:16:12.238381 2783 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/b7bbb2c5-a755-4dd6-849d-87ed55f753a2-hubble-tls\") pod \"b7bbb2c5-a755-4dd6-849d-87ed55f753a2\" (UID: \"b7bbb2c5-a755-4dd6-849d-87ed55f753a2\") "
Sep 13 10:16:12.238713 kubelet[2783]: I0913 10:16:12.238406 2783 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/b7bbb2c5-a755-4dd6-849d-87ed55f753a2-host-proc-sys-kernel\") pod \"b7bbb2c5-a755-4dd6-849d-87ed55f753a2\" (UID: \"b7bbb2c5-a755-4dd6-849d-87ed55f753a2\") "
Sep 13 10:16:12.238713 kubelet[2783]: I0913 10:16:12.238430 2783 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b7bbb2c5-a755-4dd6-849d-87ed55f753a2-lib-modules\") pod \"b7bbb2c5-a755-4dd6-849d-87ed55f753a2\" (UID: \"b7bbb2c5-a755-4dd6-849d-87ed55f753a2\") "
Sep 13 10:16:12.238713 kubelet[2783]: I0913 10:16:12.238465 2783 reconciler_common.go:299] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/b7bbb2c5-a755-4dd6-849d-87ed55f753a2-xtables-lock\") on node \"localhost\" DevicePath \"\""
Sep 13 10:16:12.238849 kubelet[2783]: I0913 10:16:12.238480 2783 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/b7bbb2c5-a755-4dd6-849d-87ed55f753a2-host-proc-sys-net\") on node \"localhost\" DevicePath \"\""
Sep 13 10:16:12.238849 kubelet[2783]: I0913 10:16:12.238492 2783 reconciler_common.go:299] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/b7bbb2c5-a755-4dd6-849d-87ed55f753a2-cilium-cgroup\") on node \"localhost\" DevicePath \"\""
Sep 13 10:16:12.238849 kubelet[2783]: I0913 10:16:12.238532 2783
reconciler_common.go:299] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/b7bbb2c5-a755-4dd6-849d-87ed55f753a2-clustermesh-secrets\") on node \"localhost\" DevicePath \"\"" Sep 13 10:16:12.238849 kubelet[2783]: I0913 10:16:12.238545 2783 reconciler_common.go:299] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/b7bbb2c5-a755-4dd6-849d-87ed55f753a2-hostproc\") on node \"localhost\" DevicePath \"\"" Sep 13 10:16:12.238849 kubelet[2783]: I0913 10:16:12.238555 2783 reconciler_common.go:299] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/b7bbb2c5-a755-4dd6-849d-87ed55f753a2-cni-path\") on node \"localhost\" DevicePath \"\"" Sep 13 10:16:12.238849 kubelet[2783]: I0913 10:16:12.238582 2783 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b7bbb2c5-a755-4dd6-849d-87ed55f753a2-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "b7bbb2c5-a755-4dd6-849d-87ed55f753a2" (UID: "b7bbb2c5-a755-4dd6-849d-87ed55f753a2"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 13 10:16:12.242419 kubelet[2783]: I0913 10:16:12.242299 2783 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/61458706-555f-4e09-a660-0d5320dabd20-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "61458706-555f-4e09-a660-0d5320dabd20" (UID: "61458706-555f-4e09-a660-0d5320dabd20"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Sep 13 10:16:12.242419 kubelet[2783]: I0913 10:16:12.242319 2783 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b7bbb2c5-a755-4dd6-849d-87ed55f753a2-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "b7bbb2c5-a755-4dd6-849d-87ed55f753a2" (UID: "b7bbb2c5-a755-4dd6-849d-87ed55f753a2"). 
InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Sep 13 10:16:12.242419 kubelet[2783]: I0913 10:16:12.242359 2783 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b7bbb2c5-a755-4dd6-849d-87ed55f753a2-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "b7bbb2c5-a755-4dd6-849d-87ed55f753a2" (UID: "b7bbb2c5-a755-4dd6-849d-87ed55f753a2"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 13 10:16:12.242753 kubelet[2783]: I0913 10:16:12.242732 2783 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b7bbb2c5-a755-4dd6-849d-87ed55f753a2-kube-api-access-djscd" (OuterVolumeSpecName: "kube-api-access-djscd") pod "b7bbb2c5-a755-4dd6-849d-87ed55f753a2" (UID: "b7bbb2c5-a755-4dd6-849d-87ed55f753a2"). InnerVolumeSpecName "kube-api-access-djscd". PluginName "kubernetes.io/projected", VolumeGIDValue "" Sep 13 10:16:12.242835 kubelet[2783]: I0913 10:16:12.242756 2783 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/61458706-555f-4e09-a660-0d5320dabd20-kube-api-access-6z9pf" (OuterVolumeSpecName: "kube-api-access-6z9pf") pod "61458706-555f-4e09-a660-0d5320dabd20" (UID: "61458706-555f-4e09-a660-0d5320dabd20"). InnerVolumeSpecName "kube-api-access-6z9pf". PluginName "kubernetes.io/projected", VolumeGIDValue "" Sep 13 10:16:12.244943 kubelet[2783]: I0913 10:16:12.244915 2783 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b7bbb2c5-a755-4dd6-849d-87ed55f753a2-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "b7bbb2c5-a755-4dd6-849d-87ed55f753a2" (UID: "b7bbb2c5-a755-4dd6-849d-87ed55f753a2"). InnerVolumeSpecName "hubble-tls". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Sep 13 10:16:12.339341 kubelet[2783]: I0913 10:16:12.339283 2783 reconciler_common.go:299] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b7bbb2c5-a755-4dd6-849d-87ed55f753a2-lib-modules\") on node \"localhost\" DevicePath \"\"" Sep 13 10:16:12.339341 kubelet[2783]: I0913 10:16:12.339324 2783 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-djscd\" (UniqueName: \"kubernetes.io/projected/b7bbb2c5-a755-4dd6-849d-87ed55f753a2-kube-api-access-djscd\") on node \"localhost\" DevicePath \"\"" Sep 13 10:16:12.339341 kubelet[2783]: I0913 10:16:12.339337 2783 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/61458706-555f-4e09-a660-0d5320dabd20-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Sep 13 10:16:12.339341 kubelet[2783]: I0913 10:16:12.339346 2783 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-6z9pf\" (UniqueName: \"kubernetes.io/projected/61458706-555f-4e09-a660-0d5320dabd20-kube-api-access-6z9pf\") on node \"localhost\" DevicePath \"\"" Sep 13 10:16:12.339341 kubelet[2783]: I0913 10:16:12.339358 2783 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/b7bbb2c5-a755-4dd6-849d-87ed55f753a2-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Sep 13 10:16:12.339636 kubelet[2783]: I0913 10:16:12.339367 2783 reconciler_common.go:299] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/b7bbb2c5-a755-4dd6-849d-87ed55f753a2-hubble-tls\") on node \"localhost\" DevicePath \"\"" Sep 13 10:16:12.339636 kubelet[2783]: I0913 10:16:12.339376 2783 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/b7bbb2c5-a755-4dd6-849d-87ed55f753a2-host-proc-sys-kernel\") on node \"localhost\" DevicePath \"\"" Sep 
13 10:16:12.659832 kubelet[2783]: I0913 10:16:12.658696 2783 scope.go:117] "RemoveContainer" containerID="6f01272e61adb4b448b106fbb81687a1840d52c98362790054f080ce693287cd" Sep 13 10:16:12.661989 containerd[1560]: time="2025-09-13T10:16:12.661942673Z" level=info msg="RemoveContainer for \"6f01272e61adb4b448b106fbb81687a1840d52c98362790054f080ce693287cd\"" Sep 13 10:16:12.665764 systemd[1]: Removed slice kubepods-besteffort-pod61458706_555f_4e09_a660_0d5320dabd20.slice - libcontainer container kubepods-besteffort-pod61458706_555f_4e09_a660_0d5320dabd20.slice. Sep 13 10:16:12.675183 systemd[1]: Removed slice kubepods-burstable-podb7bbb2c5_a755_4dd6_849d_87ed55f753a2.slice - libcontainer container kubepods-burstable-podb7bbb2c5_a755_4dd6_849d_87ed55f753a2.slice. Sep 13 10:16:12.675305 systemd[1]: kubepods-burstable-podb7bbb2c5_a755_4dd6_849d_87ed55f753a2.slice: Consumed 7.169s CPU time, 127M memory peak, 283K read from disk, 13.3M written to disk. Sep 13 10:16:12.697750 containerd[1560]: time="2025-09-13T10:16:12.697688616Z" level=info msg="RemoveContainer for \"6f01272e61adb4b448b106fbb81687a1840d52c98362790054f080ce693287cd\" returns successfully" Sep 13 10:16:12.698068 kubelet[2783]: I0913 10:16:12.698023 2783 scope.go:117] "RemoveContainer" containerID="6f01272e61adb4b448b106fbb81687a1840d52c98362790054f080ce693287cd" Sep 13 10:16:12.707279 containerd[1560]: time="2025-09-13T10:16:12.698334722Z" level=error msg="ContainerStatus for \"6f01272e61adb4b448b106fbb81687a1840d52c98362790054f080ce693287cd\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"6f01272e61adb4b448b106fbb81687a1840d52c98362790054f080ce693287cd\": not found" Sep 13 10:16:12.708940 kubelet[2783]: E0913 10:16:12.708896 2783 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"6f01272e61adb4b448b106fbb81687a1840d52c98362790054f080ce693287cd\": not found" 
containerID="6f01272e61adb4b448b106fbb81687a1840d52c98362790054f080ce693287cd" Sep 13 10:16:12.709003 kubelet[2783]: I0913 10:16:12.708945 2783 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"6f01272e61adb4b448b106fbb81687a1840d52c98362790054f080ce693287cd"} err="failed to get container status \"6f01272e61adb4b448b106fbb81687a1840d52c98362790054f080ce693287cd\": rpc error: code = NotFound desc = an error occurred when try to find container \"6f01272e61adb4b448b106fbb81687a1840d52c98362790054f080ce693287cd\": not found" Sep 13 10:16:12.709003 kubelet[2783]: I0913 10:16:12.708989 2783 scope.go:117] "RemoveContainer" containerID="dab87a2990198e44169ac0dd8f49a465b6dfb357244dda0bd27c199a4ece7b2b" Sep 13 10:16:12.711227 containerd[1560]: time="2025-09-13T10:16:12.711166870Z" level=info msg="RemoveContainer for \"dab87a2990198e44169ac0dd8f49a465b6dfb357244dda0bd27c199a4ece7b2b\"" Sep 13 10:16:12.716259 containerd[1560]: time="2025-09-13T10:16:12.716218905Z" level=info msg="RemoveContainer for \"dab87a2990198e44169ac0dd8f49a465b6dfb357244dda0bd27c199a4ece7b2b\" returns successfully" Sep 13 10:16:12.716408 kubelet[2783]: I0913 10:16:12.716363 2783 scope.go:117] "RemoveContainer" containerID="886fc4a4412e73d10b3be9117463c15226d54c50c648389309c503f1e4dd30fc" Sep 13 10:16:12.717644 containerd[1560]: time="2025-09-13T10:16:12.717607209Z" level=info msg="RemoveContainer for \"886fc4a4412e73d10b3be9117463c15226d54c50c648389309c503f1e4dd30fc\"" Sep 13 10:16:12.722047 containerd[1560]: time="2025-09-13T10:16:12.722007278Z" level=info msg="RemoveContainer for \"886fc4a4412e73d10b3be9117463c15226d54c50c648389309c503f1e4dd30fc\" returns successfully" Sep 13 10:16:12.722266 kubelet[2783]: I0913 10:16:12.722221 2783 scope.go:117] "RemoveContainer" containerID="8380169f83e28d30f165e0cbb7c9729c8427f784074027733f185bc6eb08d566" Sep 13 10:16:12.724711 containerd[1560]: time="2025-09-13T10:16:12.724678326Z" level=info msg="RemoveContainer 
for \"8380169f83e28d30f165e0cbb7c9729c8427f784074027733f185bc6eb08d566\"" Sep 13 10:16:12.728697 containerd[1560]: time="2025-09-13T10:16:12.728660231Z" level=info msg="RemoveContainer for \"8380169f83e28d30f165e0cbb7c9729c8427f784074027733f185bc6eb08d566\" returns successfully" Sep 13 10:16:12.728840 kubelet[2783]: I0913 10:16:12.728811 2783 scope.go:117] "RemoveContainer" containerID="8d857f75ade53fcf822494d6f9ead4677f0f96ffb55aec6b0e078419975290c5" Sep 13 10:16:12.730180 containerd[1560]: time="2025-09-13T10:16:12.730145518Z" level=info msg="RemoveContainer for \"8d857f75ade53fcf822494d6f9ead4677f0f96ffb55aec6b0e078419975290c5\"" Sep 13 10:16:12.734289 containerd[1560]: time="2025-09-13T10:16:12.734258092Z" level=info msg="RemoveContainer for \"8d857f75ade53fcf822494d6f9ead4677f0f96ffb55aec6b0e078419975290c5\" returns successfully" Sep 13 10:16:12.734526 kubelet[2783]: I0913 10:16:12.734433 2783 scope.go:117] "RemoveContainer" containerID="68d06fd673da33eb1116ebe7831800e92969cfa958c0e2e9a3ca9bfdf6fbc738" Sep 13 10:16:12.735809 containerd[1560]: time="2025-09-13T10:16:12.735773767Z" level=info msg="RemoveContainer for \"68d06fd673da33eb1116ebe7831800e92969cfa958c0e2e9a3ca9bfdf6fbc738\"" Sep 13 10:16:12.739167 containerd[1560]: time="2025-09-13T10:16:12.739126359Z" level=info msg="RemoveContainer for \"68d06fd673da33eb1116ebe7831800e92969cfa958c0e2e9a3ca9bfdf6fbc738\" returns successfully" Sep 13 10:16:12.739342 kubelet[2783]: I0913 10:16:12.739305 2783 scope.go:117] "RemoveContainer" containerID="dab87a2990198e44169ac0dd8f49a465b6dfb357244dda0bd27c199a4ece7b2b" Sep 13 10:16:12.739527 containerd[1560]: time="2025-09-13T10:16:12.739473678Z" level=error msg="ContainerStatus for \"dab87a2990198e44169ac0dd8f49a465b6dfb357244dda0bd27c199a4ece7b2b\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"dab87a2990198e44169ac0dd8f49a465b6dfb357244dda0bd27c199a4ece7b2b\": not found" Sep 13 10:16:12.739673 kubelet[2783]: E0913 
10:16:12.739635 2783 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"dab87a2990198e44169ac0dd8f49a465b6dfb357244dda0bd27c199a4ece7b2b\": not found" containerID="dab87a2990198e44169ac0dd8f49a465b6dfb357244dda0bd27c199a4ece7b2b" Sep 13 10:16:12.739673 kubelet[2783]: I0913 10:16:12.739663 2783 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"dab87a2990198e44169ac0dd8f49a465b6dfb357244dda0bd27c199a4ece7b2b"} err="failed to get container status \"dab87a2990198e44169ac0dd8f49a465b6dfb357244dda0bd27c199a4ece7b2b\": rpc error: code = NotFound desc = an error occurred when try to find container \"dab87a2990198e44169ac0dd8f49a465b6dfb357244dda0bd27c199a4ece7b2b\": not found" Sep 13 10:16:12.739771 kubelet[2783]: I0913 10:16:12.739681 2783 scope.go:117] "RemoveContainer" containerID="886fc4a4412e73d10b3be9117463c15226d54c50c648389309c503f1e4dd30fc" Sep 13 10:16:12.739928 containerd[1560]: time="2025-09-13T10:16:12.739870260Z" level=error msg="ContainerStatus for \"886fc4a4412e73d10b3be9117463c15226d54c50c648389309c503f1e4dd30fc\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"886fc4a4412e73d10b3be9117463c15226d54c50c648389309c503f1e4dd30fc\": not found" Sep 13 10:16:12.740033 kubelet[2783]: E0913 10:16:12.740009 2783 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"886fc4a4412e73d10b3be9117463c15226d54c50c648389309c503f1e4dd30fc\": not found" containerID="886fc4a4412e73d10b3be9117463c15226d54c50c648389309c503f1e4dd30fc" Sep 13 10:16:12.740103 kubelet[2783]: I0913 10:16:12.740033 2783 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"886fc4a4412e73d10b3be9117463c15226d54c50c648389309c503f1e4dd30fc"} err="failed to get container status 
\"886fc4a4412e73d10b3be9117463c15226d54c50c648389309c503f1e4dd30fc\": rpc error: code = NotFound desc = an error occurred when try to find container \"886fc4a4412e73d10b3be9117463c15226d54c50c648389309c503f1e4dd30fc\": not found" Sep 13 10:16:12.740103 kubelet[2783]: I0913 10:16:12.740046 2783 scope.go:117] "RemoveContainer" containerID="8380169f83e28d30f165e0cbb7c9729c8427f784074027733f185bc6eb08d566" Sep 13 10:16:12.740278 containerd[1560]: time="2025-09-13T10:16:12.740241876Z" level=error msg="ContainerStatus for \"8380169f83e28d30f165e0cbb7c9729c8427f784074027733f185bc6eb08d566\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"8380169f83e28d30f165e0cbb7c9729c8427f784074027733f185bc6eb08d566\": not found" Sep 13 10:16:12.740395 kubelet[2783]: E0913 10:16:12.740374 2783 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"8380169f83e28d30f165e0cbb7c9729c8427f784074027733f185bc6eb08d566\": not found" containerID="8380169f83e28d30f165e0cbb7c9729c8427f784074027733f185bc6eb08d566" Sep 13 10:16:12.740483 kubelet[2783]: I0913 10:16:12.740464 2783 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"8380169f83e28d30f165e0cbb7c9729c8427f784074027733f185bc6eb08d566"} err="failed to get container status \"8380169f83e28d30f165e0cbb7c9729c8427f784074027733f185bc6eb08d566\": rpc error: code = NotFound desc = an error occurred when try to find container \"8380169f83e28d30f165e0cbb7c9729c8427f784074027733f185bc6eb08d566\": not found" Sep 13 10:16:12.740483 kubelet[2783]: I0913 10:16:12.740482 2783 scope.go:117] "RemoveContainer" containerID="8d857f75ade53fcf822494d6f9ead4677f0f96ffb55aec6b0e078419975290c5" Sep 13 10:16:12.740691 containerd[1560]: time="2025-09-13T10:16:12.740658136Z" level=error msg="ContainerStatus for \"8d857f75ade53fcf822494d6f9ead4677f0f96ffb55aec6b0e078419975290c5\" failed" 
error="rpc error: code = NotFound desc = an error occurred when try to find container \"8d857f75ade53fcf822494d6f9ead4677f0f96ffb55aec6b0e078419975290c5\": not found" Sep 13 10:16:12.740807 kubelet[2783]: E0913 10:16:12.740788 2783 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"8d857f75ade53fcf822494d6f9ead4677f0f96ffb55aec6b0e078419975290c5\": not found" containerID="8d857f75ade53fcf822494d6f9ead4677f0f96ffb55aec6b0e078419975290c5" Sep 13 10:16:12.740838 kubelet[2783]: I0913 10:16:12.740811 2783 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"8d857f75ade53fcf822494d6f9ead4677f0f96ffb55aec6b0e078419975290c5"} err="failed to get container status \"8d857f75ade53fcf822494d6f9ead4677f0f96ffb55aec6b0e078419975290c5\": rpc error: code = NotFound desc = an error occurred when try to find container \"8d857f75ade53fcf822494d6f9ead4677f0f96ffb55aec6b0e078419975290c5\": not found" Sep 13 10:16:12.740838 kubelet[2783]: I0913 10:16:12.740827 2783 scope.go:117] "RemoveContainer" containerID="68d06fd673da33eb1116ebe7831800e92969cfa958c0e2e9a3ca9bfdf6fbc738" Sep 13 10:16:12.740994 containerd[1560]: time="2025-09-13T10:16:12.740967592Z" level=error msg="ContainerStatus for \"68d06fd673da33eb1116ebe7831800e92969cfa958c0e2e9a3ca9bfdf6fbc738\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"68d06fd673da33eb1116ebe7831800e92969cfa958c0e2e9a3ca9bfdf6fbc738\": not found" Sep 13 10:16:12.741084 kubelet[2783]: E0913 10:16:12.741051 2783 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"68d06fd673da33eb1116ebe7831800e92969cfa958c0e2e9a3ca9bfdf6fbc738\": not found" containerID="68d06fd673da33eb1116ebe7831800e92969cfa958c0e2e9a3ca9bfdf6fbc738" Sep 13 10:16:12.741140 kubelet[2783]: I0913 10:16:12.741087 2783 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"68d06fd673da33eb1116ebe7831800e92969cfa958c0e2e9a3ca9bfdf6fbc738"} err="failed to get container status \"68d06fd673da33eb1116ebe7831800e92969cfa958c0e2e9a3ca9bfdf6fbc738\": rpc error: code = NotFound desc = an error occurred when try to find container \"68d06fd673da33eb1116ebe7831800e92969cfa958c0e2e9a3ca9bfdf6fbc738\": not found" Sep 13 10:16:12.770699 kubelet[2783]: I0913 10:16:12.770644 2783 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="61458706-555f-4e09-a660-0d5320dabd20" path="/var/lib/kubelet/pods/61458706-555f-4e09-a660-0d5320dabd20/volumes" Sep 13 10:16:12.771279 kubelet[2783]: I0913 10:16:12.771241 2783 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b7bbb2c5-a755-4dd6-849d-87ed55f753a2" path="/var/lib/kubelet/pods/b7bbb2c5-a755-4dd6-849d-87ed55f753a2/volumes" Sep 13 10:16:12.836485 kubelet[2783]: E0913 10:16:12.836414 2783 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Sep 13 10:16:12.873048 systemd[1]: var-lib-kubelet-pods-61458706\x2d555f\x2d4e09\x2da660\x2d0d5320dabd20-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d6z9pf.mount: Deactivated successfully. Sep 13 10:16:12.873189 systemd[1]: var-lib-kubelet-pods-b7bbb2c5\x2da755\x2d4dd6\x2d849d\x2d87ed55f753a2-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2ddjscd.mount: Deactivated successfully. Sep 13 10:16:12.873263 systemd[1]: var-lib-kubelet-pods-b7bbb2c5\x2da755\x2d4dd6\x2d849d\x2d87ed55f753a2-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Sep 13 10:16:12.873334 systemd[1]: var-lib-kubelet-pods-b7bbb2c5\x2da755\x2d4dd6\x2d849d\x2d87ed55f753a2-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. 
Sep 13 10:16:13.769248 sshd[4415]: Connection closed by 10.0.0.1 port 59292 Sep 13 10:16:13.769848 sshd-session[4412]: pam_unix(sshd:session): session closed for user core Sep 13 10:16:13.780044 systemd[1]: sshd@26-10.0.0.19:22-10.0.0.1:59292.service: Deactivated successfully. Sep 13 10:16:13.782364 systemd[1]: session-27.scope: Deactivated successfully. Sep 13 10:16:13.783184 systemd-logind[1542]: Session 27 logged out. Waiting for processes to exit. Sep 13 10:16:13.786729 systemd[1]: Started sshd@27-10.0.0.19:22-10.0.0.1:59296.service - OpenSSH per-connection server daemon (10.0.0.1:59296). Sep 13 10:16:13.787821 systemd-logind[1542]: Removed session 27. Sep 13 10:16:13.840939 sshd[4568]: Accepted publickey for core from 10.0.0.1 port 59296 ssh2: RSA SHA256:zcsqT46NGGfuXQOUKdVqBiqQMVWjN6YtLkqFhpEQQJ4 Sep 13 10:16:13.842212 sshd-session[4568]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 10:16:13.846650 systemd-logind[1542]: New session 28 of user core. Sep 13 10:16:13.853642 systemd[1]: Started session-28.scope - Session 28 of User core. Sep 13 10:16:14.263714 sshd[4571]: Connection closed by 10.0.0.1 port 59296 Sep 13 10:16:14.265898 sshd-session[4568]: pam_unix(sshd:session): session closed for user core Sep 13 10:16:14.281105 systemd[1]: sshd@27-10.0.0.19:22-10.0.0.1:59296.service: Deactivated successfully. Sep 13 10:16:14.284044 systemd[1]: session-28.scope: Deactivated successfully. Sep 13 10:16:14.285907 systemd-logind[1542]: Session 28 logged out. Waiting for processes to exit. Sep 13 10:16:14.289799 systemd[1]: Started sshd@28-10.0.0.19:22-10.0.0.1:59300.service - OpenSSH per-connection server daemon (10.0.0.1:59300). Sep 13 10:16:14.293154 systemd-logind[1542]: Removed session 28. Sep 13 10:16:14.312344 systemd[1]: Created slice kubepods-burstable-pod41330002_a671_48be_bdab_4ad1ec10667a.slice - libcontainer container kubepods-burstable-pod41330002_a671_48be_bdab_4ad1ec10667a.slice. 
Sep 13 10:16:14.348240 kubelet[2783]: I0913 10:16:14.348189 2783 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b78g5\" (UniqueName: \"kubernetes.io/projected/41330002-a671-48be-bdab-4ad1ec10667a-kube-api-access-b78g5\") pod \"cilium-rk54k\" (UID: \"41330002-a671-48be-bdab-4ad1ec10667a\") " pod="kube-system/cilium-rk54k" Sep 13 10:16:14.348240 kubelet[2783]: I0913 10:16:14.348226 2783 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/41330002-a671-48be-bdab-4ad1ec10667a-bpf-maps\") pod \"cilium-rk54k\" (UID: \"41330002-a671-48be-bdab-4ad1ec10667a\") " pod="kube-system/cilium-rk54k" Sep 13 10:16:14.348240 kubelet[2783]: I0913 10:16:14.348252 2783 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/41330002-a671-48be-bdab-4ad1ec10667a-cilium-config-path\") pod \"cilium-rk54k\" (UID: \"41330002-a671-48be-bdab-4ad1ec10667a\") " pod="kube-system/cilium-rk54k" Sep 13 10:16:14.348745 kubelet[2783]: I0913 10:16:14.348265 2783 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/41330002-a671-48be-bdab-4ad1ec10667a-host-proc-sys-net\") pod \"cilium-rk54k\" (UID: \"41330002-a671-48be-bdab-4ad1ec10667a\") " pod="kube-system/cilium-rk54k" Sep 13 10:16:14.348745 kubelet[2783]: I0913 10:16:14.348279 2783 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/41330002-a671-48be-bdab-4ad1ec10667a-cilium-run\") pod \"cilium-rk54k\" (UID: \"41330002-a671-48be-bdab-4ad1ec10667a\") " pod="kube-system/cilium-rk54k" Sep 13 10:16:14.348745 kubelet[2783]: I0913 10:16:14.348333 2783 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/41330002-a671-48be-bdab-4ad1ec10667a-cilium-cgroup\") pod \"cilium-rk54k\" (UID: \"41330002-a671-48be-bdab-4ad1ec10667a\") " pod="kube-system/cilium-rk54k" Sep 13 10:16:14.348745 kubelet[2783]: I0913 10:16:14.348362 2783 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/41330002-a671-48be-bdab-4ad1ec10667a-etc-cni-netd\") pod \"cilium-rk54k\" (UID: \"41330002-a671-48be-bdab-4ad1ec10667a\") " pod="kube-system/cilium-rk54k" Sep 13 10:16:14.348745 kubelet[2783]: I0913 10:16:14.348379 2783 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/41330002-a671-48be-bdab-4ad1ec10667a-lib-modules\") pod \"cilium-rk54k\" (UID: \"41330002-a671-48be-bdab-4ad1ec10667a\") " pod="kube-system/cilium-rk54k" Sep 13 10:16:14.348745 kubelet[2783]: I0913 10:16:14.348393 2783 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/41330002-a671-48be-bdab-4ad1ec10667a-clustermesh-secrets\") pod \"cilium-rk54k\" (UID: \"41330002-a671-48be-bdab-4ad1ec10667a\") " pod="kube-system/cilium-rk54k" Sep 13 10:16:14.348939 kubelet[2783]: I0913 10:16:14.348409 2783 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/41330002-a671-48be-bdab-4ad1ec10667a-hubble-tls\") pod \"cilium-rk54k\" (UID: \"41330002-a671-48be-bdab-4ad1ec10667a\") " pod="kube-system/cilium-rk54k" Sep 13 10:16:14.348939 kubelet[2783]: I0913 10:16:14.348425 2783 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: 
\"kubernetes.io/host-path/41330002-a671-48be-bdab-4ad1ec10667a-hostproc\") pod \"cilium-rk54k\" (UID: \"41330002-a671-48be-bdab-4ad1ec10667a\") " pod="kube-system/cilium-rk54k" Sep 13 10:16:14.348939 kubelet[2783]: I0913 10:16:14.348439 2783 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/41330002-a671-48be-bdab-4ad1ec10667a-cni-path\") pod \"cilium-rk54k\" (UID: \"41330002-a671-48be-bdab-4ad1ec10667a\") " pod="kube-system/cilium-rk54k" Sep 13 10:16:14.348939 kubelet[2783]: I0913 10:16:14.348456 2783 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/41330002-a671-48be-bdab-4ad1ec10667a-xtables-lock\") pod \"cilium-rk54k\" (UID: \"41330002-a671-48be-bdab-4ad1ec10667a\") " pod="kube-system/cilium-rk54k" Sep 13 10:16:14.348939 kubelet[2783]: I0913 10:16:14.348472 2783 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/41330002-a671-48be-bdab-4ad1ec10667a-cilium-ipsec-secrets\") pod \"cilium-rk54k\" (UID: \"41330002-a671-48be-bdab-4ad1ec10667a\") " pod="kube-system/cilium-rk54k" Sep 13 10:16:14.348939 kubelet[2783]: I0913 10:16:14.348486 2783 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/41330002-a671-48be-bdab-4ad1ec10667a-host-proc-sys-kernel\") pod \"cilium-rk54k\" (UID: \"41330002-a671-48be-bdab-4ad1ec10667a\") " pod="kube-system/cilium-rk54k" Sep 13 10:16:14.350445 sshd[4583]: Accepted publickey for core from 10.0.0.1 port 59300 ssh2: RSA SHA256:zcsqT46NGGfuXQOUKdVqBiqQMVWjN6YtLkqFhpEQQJ4 Sep 13 10:16:14.352148 sshd-session[4583]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 10:16:14.357264 systemd-logind[1542]: New 
session 29 of user core. Sep 13 10:16:14.372712 systemd[1]: Started session-29.scope - Session 29 of User core. Sep 13 10:16:14.429166 sshd[4586]: Connection closed by 10.0.0.1 port 59300 Sep 13 10:16:14.429642 sshd-session[4583]: pam_unix(sshd:session): session closed for user core Sep 13 10:16:14.441280 systemd[1]: sshd@28-10.0.0.19:22-10.0.0.1:59300.service: Deactivated successfully. Sep 13 10:16:14.443881 systemd[1]: session-29.scope: Deactivated successfully. Sep 13 10:16:14.444751 systemd-logind[1542]: Session 29 logged out. Waiting for processes to exit. Sep 13 10:16:14.449042 systemd[1]: Started sshd@29-10.0.0.19:22-10.0.0.1:59304.service - OpenSSH per-connection server daemon (10.0.0.1:59304). Sep 13 10:16:14.450585 systemd-logind[1542]: Removed session 29. Sep 13 10:16:14.498765 sshd[4593]: Accepted publickey for core from 10.0.0.1 port 59304 ssh2: RSA SHA256:zcsqT46NGGfuXQOUKdVqBiqQMVWjN6YtLkqFhpEQQJ4 Sep 13 10:16:14.500818 sshd-session[4593]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 10:16:14.505977 systemd-logind[1542]: New session 30 of user core. Sep 13 10:16:14.518644 systemd[1]: Started session-30.scope - Session 30 of User core. 
Sep 13 10:16:14.616709 kubelet[2783]: E0913 10:16:14.616653 2783 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 13 10:16:14.617541 containerd[1560]: time="2025-09-13T10:16:14.617433545Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-rk54k,Uid:41330002-a671-48be-bdab-4ad1ec10667a,Namespace:kube-system,Attempt:0,}"
Sep 13 10:16:14.642654 containerd[1560]: time="2025-09-13T10:16:14.642594873Z" level=info msg="connecting to shim 7589636ff0559684dd9d86f4cea51e462fbe055cc6ab820b818edc73ec5d7f3e" address="unix:///run/containerd/s/7adaa2ef594510e88354041b84b0f40a30423b4357de6824a8d6606c11cdafad" namespace=k8s.io protocol=ttrpc version=3
Sep 13 10:16:14.673795 systemd[1]: Started cri-containerd-7589636ff0559684dd9d86f4cea51e462fbe055cc6ab820b818edc73ec5d7f3e.scope - libcontainer container 7589636ff0559684dd9d86f4cea51e462fbe055cc6ab820b818edc73ec5d7f3e.
Sep 13 10:16:14.700770 containerd[1560]: time="2025-09-13T10:16:14.700721949Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-rk54k,Uid:41330002-a671-48be-bdab-4ad1ec10667a,Namespace:kube-system,Attempt:0,} returns sandbox id \"7589636ff0559684dd9d86f4cea51e462fbe055cc6ab820b818edc73ec5d7f3e\""
Sep 13 10:16:14.701754 kubelet[2783]: E0913 10:16:14.701724 2783 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 13 10:16:14.710635 containerd[1560]: time="2025-09-13T10:16:14.710548735Z" level=info msg="CreateContainer within sandbox \"7589636ff0559684dd9d86f4cea51e462fbe055cc6ab820b818edc73ec5d7f3e\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Sep 13 10:16:14.718942 containerd[1560]: time="2025-09-13T10:16:14.718865127Z" level=info msg="Container e2a7cde957ecb5ce1de5f347aa437714bddd0a6e5437fc5134d0d2d0928e2f31: CDI devices from CRI Config.CDIDevices: []"
Sep 13 10:16:14.726820 containerd[1560]: time="2025-09-13T10:16:14.726746414Z" level=info msg="CreateContainer within sandbox \"7589636ff0559684dd9d86f4cea51e462fbe055cc6ab820b818edc73ec5d7f3e\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"e2a7cde957ecb5ce1de5f347aa437714bddd0a6e5437fc5134d0d2d0928e2f31\""
Sep 13 10:16:14.727245 containerd[1560]: time="2025-09-13T10:16:14.727216585Z" level=info msg="StartContainer for \"e2a7cde957ecb5ce1de5f347aa437714bddd0a6e5437fc5134d0d2d0928e2f31\""
Sep 13 10:16:14.728201 containerd[1560]: time="2025-09-13T10:16:14.728143002Z" level=info msg="connecting to shim e2a7cde957ecb5ce1de5f347aa437714bddd0a6e5437fc5134d0d2d0928e2f31" address="unix:///run/containerd/s/7adaa2ef594510e88354041b84b0f40a30423b4357de6824a8d6606c11cdafad" protocol=ttrpc version=3
Sep 13 10:16:14.748674 systemd[1]: Started cri-containerd-e2a7cde957ecb5ce1de5f347aa437714bddd0a6e5437fc5134d0d2d0928e2f31.scope - libcontainer container e2a7cde957ecb5ce1de5f347aa437714bddd0a6e5437fc5134d0d2d0928e2f31.
Sep 13 10:16:14.783653 containerd[1560]: time="2025-09-13T10:16:14.783593151Z" level=info msg="StartContainer for \"e2a7cde957ecb5ce1de5f347aa437714bddd0a6e5437fc5134d0d2d0928e2f31\" returns successfully"
Sep 13 10:16:14.791096 systemd[1]: cri-containerd-e2a7cde957ecb5ce1de5f347aa437714bddd0a6e5437fc5134d0d2d0928e2f31.scope: Deactivated successfully.
Sep 13 10:16:14.794362 containerd[1560]: time="2025-09-13T10:16:14.794227148Z" level=info msg="received exit event container_id:\"e2a7cde957ecb5ce1de5f347aa437714bddd0a6e5437fc5134d0d2d0928e2f31\" id:\"e2a7cde957ecb5ce1de5f347aa437714bddd0a6e5437fc5134d0d2d0928e2f31\" pid:4670 exited_at:{seconds:1757758574 nanos:793903936}"
Sep 13 10:16:14.794362 containerd[1560]: time="2025-09-13T10:16:14.794314173Z" level=info msg="TaskExit event in podsandbox handler container_id:\"e2a7cde957ecb5ce1de5f347aa437714bddd0a6e5437fc5134d0d2d0928e2f31\" id:\"e2a7cde957ecb5ce1de5f347aa437714bddd0a6e5437fc5134d0d2d0928e2f31\" pid:4670 exited_at:{seconds:1757758574 nanos:793903936}"
Sep 13 10:16:15.677602 kubelet[2783]: E0913 10:16:15.677554 2783 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 13 10:16:15.684398 containerd[1560]: time="2025-09-13T10:16:15.684325338Z" level=info msg="CreateContainer within sandbox \"7589636ff0559684dd9d86f4cea51e462fbe055cc6ab820b818edc73ec5d7f3e\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Sep 13 10:16:15.693561 containerd[1560]: time="2025-09-13T10:16:15.692452609Z" level=info msg="Container b27a555314e369ef01411bddbe38e55e1c9a28ff5daede0df6f18a22ab0068da: CDI devices from CRI Config.CDIDevices: []"
Sep 13 10:16:15.700909 containerd[1560]: time="2025-09-13T10:16:15.700844970Z" level=info msg="CreateContainer within sandbox \"7589636ff0559684dd9d86f4cea51e462fbe055cc6ab820b818edc73ec5d7f3e\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"b27a555314e369ef01411bddbe38e55e1c9a28ff5daede0df6f18a22ab0068da\""
Sep 13 10:16:15.701471 containerd[1560]: time="2025-09-13T10:16:15.701421243Z" level=info msg="StartContainer for \"b27a555314e369ef01411bddbe38e55e1c9a28ff5daede0df6f18a22ab0068da\""
Sep 13 10:16:15.702330 containerd[1560]: time="2025-09-13T10:16:15.702300640Z" level=info msg="connecting to shim b27a555314e369ef01411bddbe38e55e1c9a28ff5daede0df6f18a22ab0068da" address="unix:///run/containerd/s/7adaa2ef594510e88354041b84b0f40a30423b4357de6824a8d6606c11cdafad" protocol=ttrpc version=3
Sep 13 10:16:15.737628 systemd[1]: Started cri-containerd-b27a555314e369ef01411bddbe38e55e1c9a28ff5daede0df6f18a22ab0068da.scope - libcontainer container b27a555314e369ef01411bddbe38e55e1c9a28ff5daede0df6f18a22ab0068da.
Sep 13 10:16:15.766176 containerd[1560]: time="2025-09-13T10:16:15.766137196Z" level=info msg="StartContainer for \"b27a555314e369ef01411bddbe38e55e1c9a28ff5daede0df6f18a22ab0068da\" returns successfully"
Sep 13 10:16:15.772959 systemd[1]: cri-containerd-b27a555314e369ef01411bddbe38e55e1c9a28ff5daede0df6f18a22ab0068da.scope: Deactivated successfully.
Sep 13 10:16:15.773282 containerd[1560]: time="2025-09-13T10:16:15.773248489Z" level=info msg="received exit event container_id:\"b27a555314e369ef01411bddbe38e55e1c9a28ff5daede0df6f18a22ab0068da\" id:\"b27a555314e369ef01411bddbe38e55e1c9a28ff5daede0df6f18a22ab0068da\" pid:4715 exited_at:{seconds:1757758575 nanos:773074740}"
Sep 13 10:16:15.773395 containerd[1560]: time="2025-09-13T10:16:15.773315857Z" level=info msg="TaskExit event in podsandbox handler container_id:\"b27a555314e369ef01411bddbe38e55e1c9a28ff5daede0df6f18a22ab0068da\" id:\"b27a555314e369ef01411bddbe38e55e1c9a28ff5daede0df6f18a22ab0068da\" pid:4715 exited_at:{seconds:1757758575 nanos:773074740}"
Sep 13 10:16:15.796139 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b27a555314e369ef01411bddbe38e55e1c9a28ff5daede0df6f18a22ab0068da-rootfs.mount: Deactivated successfully.
Sep 13 10:16:16.014095 kubelet[2783]: I0913 10:16:16.013931 2783 setters.go:618] "Node became not ready" node="localhost" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-13T10:16:16Z","lastTransitionTime":"2025-09-13T10:16:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"}
Sep 13 10:16:16.681780 kubelet[2783]: E0913 10:16:16.681742 2783 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 13 10:16:16.686718 containerd[1560]: time="2025-09-13T10:16:16.686670603Z" level=info msg="CreateContainer within sandbox \"7589636ff0559684dd9d86f4cea51e462fbe055cc6ab820b818edc73ec5d7f3e\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Sep 13 10:16:16.700259 containerd[1560]: time="2025-09-13T10:16:16.700207554Z" level=info msg="Container 52dc4803b61ffd261d59e898ddfcceace6a1e56310dc458150c356140b8b5daa: CDI devices from CRI Config.CDIDevices: []"
Sep 13 10:16:16.708312 containerd[1560]: time="2025-09-13T10:16:16.708280167Z" level=info msg="CreateContainer within sandbox \"7589636ff0559684dd9d86f4cea51e462fbe055cc6ab820b818edc73ec5d7f3e\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"52dc4803b61ffd261d59e898ddfcceace6a1e56310dc458150c356140b8b5daa\""
Sep 13 10:16:16.708996 containerd[1560]: time="2025-09-13T10:16:16.708958722Z" level=info msg="StartContainer for \"52dc4803b61ffd261d59e898ddfcceace6a1e56310dc458150c356140b8b5daa\""
Sep 13 10:16:16.711966 containerd[1560]: time="2025-09-13T10:16:16.711787094Z" level=info msg="connecting to shim 52dc4803b61ffd261d59e898ddfcceace6a1e56310dc458150c356140b8b5daa" address="unix:///run/containerd/s/7adaa2ef594510e88354041b84b0f40a30423b4357de6824a8d6606c11cdafad" protocol=ttrpc version=3
Sep 13 10:16:16.733632 systemd[1]: Started cri-containerd-52dc4803b61ffd261d59e898ddfcceace6a1e56310dc458150c356140b8b5daa.scope - libcontainer container 52dc4803b61ffd261d59e898ddfcceace6a1e56310dc458150c356140b8b5daa.
Sep 13 10:16:16.772726 containerd[1560]: time="2025-09-13T10:16:16.772681636Z" level=info msg="StartContainer for \"52dc4803b61ffd261d59e898ddfcceace6a1e56310dc458150c356140b8b5daa\" returns successfully"
Sep 13 10:16:16.774406 systemd[1]: cri-containerd-52dc4803b61ffd261d59e898ddfcceace6a1e56310dc458150c356140b8b5daa.scope: Deactivated successfully.
Sep 13 10:16:16.777270 containerd[1560]: time="2025-09-13T10:16:16.777212584Z" level=info msg="received exit event container_id:\"52dc4803b61ffd261d59e898ddfcceace6a1e56310dc458150c356140b8b5daa\" id:\"52dc4803b61ffd261d59e898ddfcceace6a1e56310dc458150c356140b8b5daa\" pid:4759 exited_at:{seconds:1757758576 nanos:776911253}"
Sep 13 10:16:16.777788 containerd[1560]: time="2025-09-13T10:16:16.777468881Z" level=info msg="TaskExit event in podsandbox handler container_id:\"52dc4803b61ffd261d59e898ddfcceace6a1e56310dc458150c356140b8b5daa\" id:\"52dc4803b61ffd261d59e898ddfcceace6a1e56310dc458150c356140b8b5daa\" pid:4759 exited_at:{seconds:1757758576 nanos:776911253}"
Sep 13 10:16:16.799552 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-52dc4803b61ffd261d59e898ddfcceace6a1e56310dc458150c356140b8b5daa-rootfs.mount: Deactivated successfully.
Sep 13 10:16:17.689085 kubelet[2783]: E0913 10:16:17.688825 2783 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 13 10:16:17.696416 containerd[1560]: time="2025-09-13T10:16:17.696291160Z" level=info msg="CreateContainer within sandbox \"7589636ff0559684dd9d86f4cea51e462fbe055cc6ab820b818edc73ec5d7f3e\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Sep 13 10:16:17.707323 containerd[1560]: time="2025-09-13T10:16:17.707271240Z" level=info msg="Container dd18d73f6cc062678ca802660f1306112ec5a540bbfd86b8df15d2c580cd0def: CDI devices from CRI Config.CDIDevices: []"
Sep 13 10:16:17.716455 containerd[1560]: time="2025-09-13T10:16:17.716385965Z" level=info msg="CreateContainer within sandbox \"7589636ff0559684dd9d86f4cea51e462fbe055cc6ab820b818edc73ec5d7f3e\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"dd18d73f6cc062678ca802660f1306112ec5a540bbfd86b8df15d2c580cd0def\""
Sep 13 10:16:17.717087 containerd[1560]: time="2025-09-13T10:16:17.717063088Z" level=info msg="StartContainer for \"dd18d73f6cc062678ca802660f1306112ec5a540bbfd86b8df15d2c580cd0def\""
Sep 13 10:16:17.717896 containerd[1560]: time="2025-09-13T10:16:17.717870077Z" level=info msg="connecting to shim dd18d73f6cc062678ca802660f1306112ec5a540bbfd86b8df15d2c580cd0def" address="unix:///run/containerd/s/7adaa2ef594510e88354041b84b0f40a30423b4357de6824a8d6606c11cdafad" protocol=ttrpc version=3
Sep 13 10:16:17.747624 systemd[1]: Started cri-containerd-dd18d73f6cc062678ca802660f1306112ec5a540bbfd86b8df15d2c580cd0def.scope - libcontainer container dd18d73f6cc062678ca802660f1306112ec5a540bbfd86b8df15d2c580cd0def.
Sep 13 10:16:17.768602 kubelet[2783]: E0913 10:16:17.768435 2783 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 13 10:16:17.782321 systemd[1]: cri-containerd-dd18d73f6cc062678ca802660f1306112ec5a540bbfd86b8df15d2c580cd0def.scope: Deactivated successfully.
Sep 13 10:16:17.783398 containerd[1560]: time="2025-09-13T10:16:17.782444238Z" level=info msg="TaskExit event in podsandbox handler container_id:\"dd18d73f6cc062678ca802660f1306112ec5a540bbfd86b8df15d2c580cd0def\" id:\"dd18d73f6cc062678ca802660f1306112ec5a540bbfd86b8df15d2c580cd0def\" pid:4798 exited_at:{seconds:1757758577 nanos:782208742}"
Sep 13 10:16:17.783398 containerd[1560]: time="2025-09-13T10:16:17.783158211Z" level=info msg="received exit event container_id:\"dd18d73f6cc062678ca802660f1306112ec5a540bbfd86b8df15d2c580cd0def\" id:\"dd18d73f6cc062678ca802660f1306112ec5a540bbfd86b8df15d2c580cd0def\" pid:4798 exited_at:{seconds:1757758577 nanos:782208742}"
Sep 13 10:16:17.791841 containerd[1560]: time="2025-09-13T10:16:17.791794399Z" level=info msg="StartContainer for \"dd18d73f6cc062678ca802660f1306112ec5a540bbfd86b8df15d2c580cd0def\" returns successfully"
Sep 13 10:16:17.809312 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-dd18d73f6cc062678ca802660f1306112ec5a540bbfd86b8df15d2c580cd0def-rootfs.mount: Deactivated successfully.
Sep 13 10:16:17.837836 kubelet[2783]: E0913 10:16:17.837747 2783 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Sep 13 10:16:18.697937 kubelet[2783]: E0913 10:16:18.697444 2783 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 13 10:16:18.705564 containerd[1560]: time="2025-09-13T10:16:18.705467310Z" level=info msg="CreateContainer within sandbox \"7589636ff0559684dd9d86f4cea51e462fbe055cc6ab820b818edc73ec5d7f3e\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Sep 13 10:16:18.716752 containerd[1560]: time="2025-09-13T10:16:18.716689904Z" level=info msg="Container 04934b1a0401db7516cd7c5077f8688b47b61490a13445f302812b904abd6c7c: CDI devices from CRI Config.CDIDevices: []"
Sep 13 10:16:18.727519 containerd[1560]: time="2025-09-13T10:16:18.727445304Z" level=info msg="CreateContainer within sandbox \"7589636ff0559684dd9d86f4cea51e462fbe055cc6ab820b818edc73ec5d7f3e\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"04934b1a0401db7516cd7c5077f8688b47b61490a13445f302812b904abd6c7c\""
Sep 13 10:16:18.728058 containerd[1560]: time="2025-09-13T10:16:18.728014452Z" level=info msg="StartContainer for \"04934b1a0401db7516cd7c5077f8688b47b61490a13445f302812b904abd6c7c\""
Sep 13 10:16:18.729534 containerd[1560]: time="2025-09-13T10:16:18.729475921Z" level=info msg="connecting to shim 04934b1a0401db7516cd7c5077f8688b47b61490a13445f302812b904abd6c7c" address="unix:///run/containerd/s/7adaa2ef594510e88354041b84b0f40a30423b4357de6824a8d6606c11cdafad" protocol=ttrpc version=3
Sep 13 10:16:18.751668 systemd[1]: Started cri-containerd-04934b1a0401db7516cd7c5077f8688b47b61490a13445f302812b904abd6c7c.scope - libcontainer container 04934b1a0401db7516cd7c5077f8688b47b61490a13445f302812b904abd6c7c.
Sep 13 10:16:18.792006 containerd[1560]: time="2025-09-13T10:16:18.791961960Z" level=info msg="StartContainer for \"04934b1a0401db7516cd7c5077f8688b47b61490a13445f302812b904abd6c7c\" returns successfully"
Sep 13 10:16:18.872405 containerd[1560]: time="2025-09-13T10:16:18.872357082Z" level=info msg="TaskExit event in podsandbox handler container_id:\"04934b1a0401db7516cd7c5077f8688b47b61490a13445f302812b904abd6c7c\" id:\"7304bf042844c11984ced4772cc8ffd7dac851ca4bfad933d08890fbdc495619\" pid:4866 exited_at:{seconds:1757758578 nanos:871990378}"
Sep 13 10:16:19.251569 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni-avx))
Sep 13 10:16:19.706740 kubelet[2783]: E0913 10:16:19.706689 2783 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 13 10:16:19.722494 kubelet[2783]: I0913 10:16:19.722420 2783 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-rk54k" podStartSLOduration=5.722403516 podStartE2EDuration="5.722403516s" podCreationTimestamp="2025-09-13 10:16:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-13 10:16:19.722049174 +0000 UTC m=+97.062298906" watchObservedRunningTime="2025-09-13 10:16:19.722403516 +0000 UTC m=+97.062653238"
Sep 13 10:16:20.708708 kubelet[2783]: E0913 10:16:20.708662 2783 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 13 10:16:20.891636 containerd[1560]: time="2025-09-13T10:16:20.891140901Z" level=info msg="TaskExit event in podsandbox handler container_id:\"04934b1a0401db7516cd7c5077f8688b47b61490a13445f302812b904abd6c7c\" id:\"b0f7478bb692212a44a7b5ef880899a397a26d5fdca4737f6a229f5cf49c4729\" pid:5010 exit_status:1 exited_at:{seconds:1757758580 nanos:890589878}"
Sep 13 10:16:22.585551 systemd-networkd[1476]: lxc_health: Link UP
Sep 13 10:16:22.587246 systemd-networkd[1476]: lxc_health: Gained carrier
Sep 13 10:16:22.627718 kubelet[2783]: E0913 10:16:22.627543 2783 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 13 10:16:22.712651 kubelet[2783]: E0913 10:16:22.712603 2783 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 13 10:16:23.071245 containerd[1560]: time="2025-09-13T10:16:23.071182149Z" level=info msg="TaskExit event in podsandbox handler container_id:\"04934b1a0401db7516cd7c5077f8688b47b61490a13445f302812b904abd6c7c\" id:\"fb480a177d480ed5436bed0afcba414883f3198e1eba452209cb7a528fa5c35b\" pid:5395 exited_at:{seconds:1757758583 nanos:70149164}"
Sep 13 10:16:24.593718 systemd-networkd[1476]: lxc_health: Gained IPv6LL
Sep 13 10:16:25.173313 containerd[1560]: time="2025-09-13T10:16:25.173226013Z" level=info msg="TaskExit event in podsandbox handler container_id:\"04934b1a0401db7516cd7c5077f8688b47b61490a13445f302812b904abd6c7c\" id:\"f4e9d3305915c16913ae4b24f217454bd4c14acf7fbbaf8953451b340a097493\" pid:5434 exited_at:{seconds:1757758585 nanos:172685810}"
Sep 13 10:16:25.176075 kubelet[2783]: E0913 10:16:25.175870 2783 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 127.0.0.1:44006->127.0.0.1:41803: write tcp 127.0.0.1:44006->127.0.0.1:41803: write: broken pipe
Sep 13 10:16:27.265814 containerd[1560]: time="2025-09-13T10:16:27.265743254Z" level=info msg="TaskExit event in podsandbox handler container_id:\"04934b1a0401db7516cd7c5077f8688b47b61490a13445f302812b904abd6c7c\" id:\"800ef24658309a8277e146eb0f64a1fefa799030b9e35cf5ccf616953620fcbd\" pid:5464 exited_at:{seconds:1757758587 nanos:265330503}"
Sep 13 10:16:28.768787 kubelet[2783]: E0913 10:16:28.768716 2783 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 13 10:16:29.358529 containerd[1560]: time="2025-09-13T10:16:29.358447353Z" level=info msg="TaskExit event in podsandbox handler container_id:\"04934b1a0401db7516cd7c5077f8688b47b61490a13445f302812b904abd6c7c\" id:\"d1fb08a60bea0d5c694f0eb15c665a764f08063d9fe9c3f3424b0145f3260ba6\" pid:5487 exited_at:{seconds:1757758589 nanos:358126817}"
Sep 13 10:16:29.369380 sshd[4600]: Connection closed by 10.0.0.1 port 59304
Sep 13 10:16:29.369902 sshd-session[4593]: pam_unix(sshd:session): session closed for user core
Sep 13 10:16:29.374054 systemd[1]: sshd@29-10.0.0.19:22-10.0.0.1:59304.service: Deactivated successfully.
Sep 13 10:16:29.376258 systemd[1]: session-30.scope: Deactivated successfully.
Sep 13 10:16:29.377166 systemd-logind[1542]: Session 30 logged out. Waiting for processes to exit.
Sep 13 10:16:29.378932 systemd-logind[1542]: Removed session 30.