Sep 12 23:08:27.981519 kernel: Linux version 6.12.47-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.3.0 p8) 14.3.0, GNU ld (Gentoo 2.44 p4) 2.44.0) #1 SMP PREEMPT_DYNAMIC Fri Sep 12 20:38:35 -00 2025
Sep 12 23:08:27.981552 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=8e60d6befc710e967d67e9a1d87ced7416895090c99a765b3a00e66a62f49e40
Sep 12 23:08:27.981570 kernel: BIOS-provided physical RAM map:
Sep 12 23:08:27.981579 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Sep 12 23:08:27.981588 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Sep 12 23:08:27.981596 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Sep 12 23:08:27.981606 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000009cfdbfff] usable
Sep 12 23:08:27.981616 kernel: BIOS-e820: [mem 0x000000009cfdc000-0x000000009cffffff] reserved
Sep 12 23:08:27.981629 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved
Sep 12 23:08:27.981640 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved
Sep 12 23:08:27.981652 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Sep 12 23:08:27.981664 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Sep 12 23:08:27.981676 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
Sep 12 23:08:27.981688 kernel: NX (Execute Disable) protection: active
Sep 12 23:08:27.981706 kernel: APIC: Static calls initialized
Sep 12 23:08:27.981719 kernel: SMBIOS 2.8 present.
Sep 12 23:08:27.981737 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.16.2-debian-1.16.2-1 04/01/2014
Sep 12 23:08:27.981750 kernel: DMI: Memory slots populated: 1/1
Sep 12 23:08:27.981780 kernel: Hypervisor detected: KVM
Sep 12 23:08:27.981791 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Sep 12 23:08:27.981802 kernel: kvm-clock: using sched offset of 9594083358 cycles
Sep 12 23:08:27.981813 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Sep 12 23:08:27.981824 kernel: tsc: Detected 2794.748 MHz processor
Sep 12 23:08:27.981838 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Sep 12 23:08:27.981848 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Sep 12 23:08:27.981857 kernel: last_pfn = 0x9cfdc max_arch_pfn = 0x400000000
Sep 12 23:08:27.981867 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Sep 12 23:08:27.981877 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Sep 12 23:08:27.981886 kernel: Using GB pages for direct mapping
Sep 12 23:08:27.981896 kernel: ACPI: Early table checksum verification disabled
Sep 12 23:08:27.981905 kernel: ACPI: RSDP 0x00000000000F59D0 000014 (v00 BOCHS )
Sep 12 23:08:27.981915 kernel: ACPI: RSDT 0x000000009CFE241A 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 12 23:08:27.981927 kernel: ACPI: FACP 0x000000009CFE21FA 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Sep 12 23:08:27.981937 kernel: ACPI: DSDT 0x000000009CFE0040 0021BA (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 12 23:08:27.981947 kernel: ACPI: FACS 0x000000009CFE0000 000040
Sep 12 23:08:27.981956 kernel: ACPI: APIC 0x000000009CFE22EE 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 12 23:08:27.981967 kernel: ACPI: HPET 0x000000009CFE237E 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 12 23:08:27.981977 kernel: ACPI: MCFG 0x000000009CFE23B6 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 12 23:08:27.981988 kernel: ACPI: WAET 0x000000009CFE23F2 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 12 23:08:27.981998 kernel: ACPI: Reserving FACP table memory at [mem 0x9cfe21fa-0x9cfe22ed]
Sep 12 23:08:27.982016 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cfe0040-0x9cfe21f9]
Sep 12 23:08:27.982027 kernel: ACPI: Reserving FACS table memory at [mem 0x9cfe0000-0x9cfe003f]
Sep 12 23:08:27.982037 kernel: ACPI: Reserving APIC table memory at [mem 0x9cfe22ee-0x9cfe237d]
Sep 12 23:08:27.982048 kernel: ACPI: Reserving HPET table memory at [mem 0x9cfe237e-0x9cfe23b5]
Sep 12 23:08:27.982059 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cfe23b6-0x9cfe23f1]
Sep 12 23:08:27.982070 kernel: ACPI: Reserving WAET table memory at [mem 0x9cfe23f2-0x9cfe2419]
Sep 12 23:08:27.982083 kernel: No NUMA configuration found
Sep 12 23:08:27.982093 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cfdbfff]
Sep 12 23:08:27.982104 kernel: NODE_DATA(0) allocated [mem 0x9cfd4dc0-0x9cfdbfff]
Sep 12 23:08:27.982115 kernel: Zone ranges:
Sep 12 23:08:27.982126 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Sep 12 23:08:27.982136 kernel: DMA32 [mem 0x0000000001000000-0x000000009cfdbfff]
Sep 12 23:08:27.982147 kernel: Normal empty
Sep 12 23:08:27.982157 kernel: Device empty
Sep 12 23:08:27.982168 kernel: Movable zone start for each node
Sep 12 23:08:27.982179 kernel: Early memory node ranges
Sep 12 23:08:27.982192 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
Sep 12 23:08:27.982203 kernel: node 0: [mem 0x0000000000100000-0x000000009cfdbfff]
Sep 12 23:08:27.982214 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cfdbfff]
Sep 12 23:08:27.982224 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Sep 12 23:08:27.982235 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Sep 12 23:08:27.982246 kernel: On node 0, zone DMA32: 12324 pages in unavailable ranges
Sep 12 23:08:27.982256 kernel: ACPI: PM-Timer IO Port: 0x608
Sep 12 23:08:27.982271 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Sep 12 23:08:27.982282 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Sep 12 23:08:27.982295 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Sep 12 23:08:27.982306 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Sep 12 23:08:27.982325 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Sep 12 23:08:27.982336 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Sep 12 23:08:27.982347 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Sep 12 23:08:27.982358 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Sep 12 23:08:27.982368 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Sep 12 23:08:27.982379 kernel: TSC deadline timer available
Sep 12 23:08:27.982395 kernel: CPU topo: Max. logical packages: 1
Sep 12 23:08:27.982409 kernel: CPU topo: Max. logical dies: 1
Sep 12 23:08:27.982419 kernel: CPU topo: Max. dies per package: 1
Sep 12 23:08:27.982430 kernel: CPU topo: Max. threads per core: 1
Sep 12 23:08:27.982441 kernel: CPU topo: Num. cores per package: 4
Sep 12 23:08:27.982451 kernel: CPU topo: Num. threads per package: 4
Sep 12 23:08:27.982462 kernel: CPU topo: Allowing 4 present CPUs plus 0 hotplug CPUs
Sep 12 23:08:27.982473 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Sep 12 23:08:27.982483 kernel: kvm-guest: KVM setup pv remote TLB flush
Sep 12 23:08:27.982494 kernel: kvm-guest: setup PV sched yield
Sep 12 23:08:27.982507 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices
Sep 12 23:08:27.982518 kernel: Booting paravirtualized kernel on KVM
Sep 12 23:08:27.982528 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Sep 12 23:08:27.982539 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1
Sep 12 23:08:27.982548 kernel: percpu: Embedded 60 pages/cpu s207832 r8192 d29736 u524288
Sep 12 23:08:27.982558 kernel: pcpu-alloc: s207832 r8192 d29736 u524288 alloc=1*2097152
Sep 12 23:08:27.982568 kernel: pcpu-alloc: [0] 0 1 2 3
Sep 12 23:08:27.982578 kernel: kvm-guest: PV spinlocks enabled
Sep 12 23:08:27.982588 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Sep 12 23:08:27.982602 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=8e60d6befc710e967d67e9a1d87ced7416895090c99a765b3a00e66a62f49e40
Sep 12 23:08:27.982613 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Sep 12 23:08:27.982623 kernel: random: crng init done
Sep 12 23:08:27.982633 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Sep 12 23:08:27.982643 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Sep 12 23:08:27.982655 kernel: Fallback order for Node 0: 0
Sep 12 23:08:27.982668 kernel: Built 1 zonelists, mobility grouping on. Total pages: 642938
Sep 12 23:08:27.982682 kernel: Policy zone: DMA32
Sep 12 23:08:27.982699 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Sep 12 23:08:27.982712 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Sep 12 23:08:27.982726 kernel: ftrace: allocating 40125 entries in 157 pages
Sep 12 23:08:27.982739 kernel: ftrace: allocated 157 pages with 5 groups
Sep 12 23:08:27.982753 kernel: Dynamic Preempt: voluntary
Sep 12 23:08:27.982783 kernel: rcu: Preemptible hierarchical RCU implementation.
Sep 12 23:08:27.982798 kernel: rcu: RCU event tracing is enabled.
Sep 12 23:08:27.982811 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Sep 12 23:08:27.982825 kernel: Trampoline variant of Tasks RCU enabled.
Sep 12 23:08:27.982847 kernel: Rude variant of Tasks RCU enabled.
Sep 12 23:08:27.982860 kernel: Tracing variant of Tasks RCU enabled.
Sep 12 23:08:27.982871 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Sep 12 23:08:27.982882 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Sep 12 23:08:27.982893 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Sep 12 23:08:27.982904 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Sep 12 23:08:27.982915 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Sep 12 23:08:27.982926 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16
Sep 12 23:08:27.982937 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Sep 12 23:08:27.982960 kernel: Console: colour VGA+ 80x25
Sep 12 23:08:27.982971 kernel: printk: legacy console [ttyS0] enabled
Sep 12 23:08:27.982982 kernel: ACPI: Core revision 20240827
Sep 12 23:08:27.982996 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Sep 12 23:08:27.983008 kernel: APIC: Switch to symmetric I/O mode setup
Sep 12 23:08:27.983019 kernel: x2apic enabled
Sep 12 23:08:27.983030 kernel: APIC: Switched APIC routing to: physical x2apic
Sep 12 23:08:27.983041 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask()
Sep 12 23:08:27.983053 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself()
Sep 12 23:08:27.983067 kernel: kvm-guest: setup PV IPIs
Sep 12 23:08:27.983078 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Sep 12 23:08:27.983090 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x2848df6a9de, max_idle_ns: 440795280912 ns
Sep 12 23:08:27.983101 kernel: Calibrating delay loop (skipped) preset value.. 5589.49 BogoMIPS (lpj=2794748)
Sep 12 23:08:27.983112 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Sep 12 23:08:27.983123 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Sep 12 23:08:27.983134 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Sep 12 23:08:27.983146 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Sep 12 23:08:27.983160 kernel: Spectre V2 : Mitigation: Retpolines
Sep 12 23:08:27.983171 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Sep 12 23:08:27.983183 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls
Sep 12 23:08:27.983195 kernel: active return thunk: retbleed_return_thunk
Sep 12 23:08:27.983206 kernel: RETBleed: Mitigation: untrained return thunk
Sep 12 23:08:27.983217 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Sep 12 23:08:27.983229 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Sep 12 23:08:27.983240 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied!
Sep 12 23:08:27.983255 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options.
Sep 12 23:08:27.983266 kernel: active return thunk: srso_return_thunk
Sep 12 23:08:27.983277 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode
Sep 12 23:08:27.983288 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Sep 12 23:08:27.983298 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Sep 12 23:08:27.983309 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Sep 12 23:08:27.983520 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Sep 12 23:08:27.983583 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format.
Sep 12 23:08:27.983595 kernel: Freeing SMP alternatives memory: 32K
Sep 12 23:08:27.983612 kernel: pid_max: default: 32768 minimum: 301
Sep 12 23:08:27.983623 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima
Sep 12 23:08:27.983635 kernel: landlock: Up and running.
Sep 12 23:08:27.983650 kernel: SELinux: Initializing.
Sep 12 23:08:27.983669 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Sep 12 23:08:27.983684 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Sep 12 23:08:27.983698 kernel: smpboot: CPU0: AMD EPYC 7402P 24-Core Processor (family: 0x17, model: 0x31, stepping: 0x0)
Sep 12 23:08:27.983712 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
Sep 12 23:08:27.983727 kernel: ... version: 0
Sep 12 23:08:27.983744 kernel: ... bit width: 48
Sep 12 23:08:27.983757 kernel: ... generic registers: 6
Sep 12 23:08:27.983788 kernel: ... value mask: 0000ffffffffffff
Sep 12 23:08:27.983799 kernel: ... max period: 00007fffffffffff
Sep 12 23:08:27.983809 kernel: ... fixed-purpose events: 0
Sep 12 23:08:27.983820 kernel: ... event mask: 000000000000003f
Sep 12 23:08:27.983830 kernel: signal: max sigframe size: 1776
Sep 12 23:08:27.983841 kernel: rcu: Hierarchical SRCU implementation.
Sep 12 23:08:27.983852 kernel: rcu: Max phase no-delay instances is 400.
Sep 12 23:08:27.983866 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level
Sep 12 23:08:27.983877 kernel: smp: Bringing up secondary CPUs ...
Sep 12 23:08:27.983889 kernel: smpboot: x86: Booting SMP configuration:
Sep 12 23:08:27.983901 kernel: .... node #0, CPUs: #1 #2 #3
Sep 12 23:08:27.983912 kernel: smp: Brought up 1 node, 4 CPUs
Sep 12 23:08:27.983924 kernel: smpboot: Total of 4 processors activated (22357.98 BogoMIPS)
Sep 12 23:08:27.983937 kernel: Memory: 2428916K/2571752K available (14336K kernel code, 2432K rwdata, 9992K rodata, 54084K init, 2880K bss, 136904K reserved, 0K cma-reserved)
Sep 12 23:08:27.983948 kernel: devtmpfs: initialized
Sep 12 23:08:27.983960 kernel: x86/mm: Memory block size: 128MB
Sep 12 23:08:27.983975 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Sep 12 23:08:27.983987 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Sep 12 23:08:27.983999 kernel: pinctrl core: initialized pinctrl subsystem
Sep 12 23:08:27.984011 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Sep 12 23:08:27.984023 kernel: audit: initializing netlink subsys (disabled)
Sep 12 23:08:27.984035 kernel: audit: type=2000 audit(1757718502.206:1): state=initialized audit_enabled=0 res=1
Sep 12 23:08:27.984046 kernel: thermal_sys: Registered thermal governor 'step_wise'
Sep 12 23:08:27.984058 kernel: thermal_sys: Registered thermal governor 'user_space'
Sep 12 23:08:27.984069 kernel: cpuidle: using governor menu
Sep 12 23:08:27.984082 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Sep 12 23:08:27.984094 kernel: dca service started, version 1.12.1
Sep 12 23:08:27.984105 kernel: PCI: ECAM [mem 0xb0000000-0xbfffffff] (base 0xb0000000) for domain 0000 [bus 00-ff]
Sep 12 23:08:27.984116 kernel: PCI: ECAM [mem 0xb0000000-0xbfffffff] reserved as E820 entry
Sep 12 23:08:27.984126 kernel: PCI: Using configuration type 1 for base access
Sep 12 23:08:27.984137 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Sep 12 23:08:27.984148 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Sep 12 23:08:27.984158 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Sep 12 23:08:27.984168 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Sep 12 23:08:27.984196 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Sep 12 23:08:27.984207 kernel: ACPI: Added _OSI(Module Device)
Sep 12 23:08:27.984217 kernel: ACPI: Added _OSI(Processor Device)
Sep 12 23:08:27.984228 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Sep 12 23:08:27.984238 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Sep 12 23:08:27.984253 kernel: ACPI: Interpreter enabled
Sep 12 23:08:27.984265 kernel: ACPI: PM: (supports S0 S3 S5)
Sep 12 23:08:27.984277 kernel: ACPI: Using IOAPIC for interrupt routing
Sep 12 23:08:27.984289 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Sep 12 23:08:27.984305 kernel: PCI: Using E820 reservations for host bridge windows
Sep 12 23:08:27.984330 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Sep 12 23:08:27.984342 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Sep 12 23:08:27.984611 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Sep 12 23:08:27.984801 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR]
Sep 12 23:08:27.984959 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability]
Sep 12 23:08:27.984974 kernel: PCI host bridge to bus 0000:00
Sep 12 23:08:27.985137 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Sep 12 23:08:27.985292 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Sep 12 23:08:27.985449 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Sep 12 23:08:27.985590 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xafffffff window]
Sep 12 23:08:27.985752 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Sep 12 23:08:27.985914 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x8ffffffff window]
Sep 12 23:08:27.986058 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Sep 12 23:08:27.986274 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000 conventional PCI endpoint
Sep 12 23:08:27.986463 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000 conventional PCI endpoint
Sep 12 23:08:27.986625 kernel: pci 0000:00:01.0: BAR 0 [mem 0xfd000000-0xfdffffff pref]
Sep 12 23:08:27.986840 kernel: pci 0000:00:01.0: BAR 2 [mem 0xfebd0000-0xfebd0fff]
Sep 12 23:08:27.987002 kernel: pci 0000:00:01.0: ROM [mem 0xfebc0000-0xfebcffff pref]
Sep 12 23:08:27.987154 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Sep 12 23:08:27.987345 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00 conventional PCI endpoint
Sep 12 23:08:27.987633 kernel: pci 0000:00:02.0: BAR 0 [io 0xc0c0-0xc0df]
Sep 12 23:08:27.987861 kernel: pci 0000:00:02.0: BAR 1 [mem 0xfebd1000-0xfebd1fff]
Sep 12 23:08:27.988019 kernel: pci 0000:00:02.0: BAR 4 [mem 0xfe000000-0xfe003fff 64bit pref]
Sep 12 23:08:27.988290 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000 conventional PCI endpoint
Sep 12 23:08:27.988466 kernel: pci 0000:00:03.0: BAR 0 [io 0xc000-0xc07f]
Sep 12 23:08:27.988621 kernel: pci 0000:00:03.0: BAR 1 [mem 0xfebd2000-0xfebd2fff]
Sep 12 23:08:27.988824 kernel: pci 0000:00:03.0: BAR 4 [mem 0xfe004000-0xfe007fff 64bit pref]
Sep 12 23:08:27.989095 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint
Sep 12 23:08:27.989495 kernel: pci 0000:00:04.0: BAR 0 [io 0xc0e0-0xc0ff]
Sep 12 23:08:27.989809 kernel: pci 0000:00:04.0: BAR 1 [mem 0xfebd3000-0xfebd3fff]
Sep 12 23:08:27.989982 kernel: pci 0000:00:04.0: BAR 4 [mem 0xfe008000-0xfe00bfff 64bit pref]
Sep 12 23:08:27.990151 kernel: pci 0000:00:04.0: ROM [mem 0xfeb80000-0xfebbffff pref]
Sep 12 23:08:27.990534 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100 conventional PCI endpoint
Sep 12 23:08:27.990743 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Sep 12 23:08:27.990958 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601 conventional PCI endpoint
Sep 12 23:08:27.991123 kernel: pci 0000:00:1f.2: BAR 4 [io 0xc100-0xc11f]
Sep 12 23:08:27.991281 kernel: pci 0000:00:1f.2: BAR 5 [mem 0xfebd4000-0xfebd4fff]
Sep 12 23:08:27.991724 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500 conventional PCI endpoint
Sep 12 23:08:27.992293 kernel: pci 0000:00:1f.3: BAR 4 [io 0x0700-0x073f]
Sep 12 23:08:27.992315 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Sep 12 23:08:27.992341 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Sep 12 23:08:27.992353 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Sep 12 23:08:27.992365 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Sep 12 23:08:27.992377 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Sep 12 23:08:27.992388 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Sep 12 23:08:27.992400 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Sep 12 23:08:27.992410 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Sep 12 23:08:27.992421 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Sep 12 23:08:27.992432 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Sep 12 23:08:27.992446 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Sep 12 23:08:27.992457 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Sep 12 23:08:27.992467 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Sep 12 23:08:27.992479 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Sep 12 23:08:27.992490 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Sep 12 23:08:27.992501 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Sep 12 23:08:27.992513 kernel: iommu: Default domain type: Translated
Sep 12 23:08:27.992524 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Sep 12 23:08:27.992536 kernel: PCI: Using ACPI for IRQ routing
Sep 12 23:08:27.992612 kernel: PCI: pci_cache_line_size set to 64 bytes
Sep 12 23:08:27.992633 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Sep 12 23:08:27.992645 kernel: e820: reserve RAM buffer [mem 0x9cfdc000-0x9fffffff]
Sep 12 23:08:27.992882 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Sep 12 23:08:27.993040 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Sep 12 23:08:27.993500 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Sep 12 23:08:27.993520 kernel: vgaarb: loaded
Sep 12 23:08:27.993533 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Sep 12 23:08:27.993551 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Sep 12 23:08:27.993563 kernel: clocksource: Switched to clocksource kvm-clock
Sep 12 23:08:27.993702 kernel: VFS: Disk quotas dquot_6.6.0
Sep 12 23:08:27.993715 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Sep 12 23:08:27.993727 kernel: pnp: PnP ACPI init
Sep 12 23:08:27.994131 kernel: system 00:05: [mem 0xb0000000-0xbfffffff window] has been reserved
Sep 12 23:08:27.994153 kernel: pnp: PnP ACPI: found 6 devices
Sep 12 23:08:27.994172 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Sep 12 23:08:27.994188 kernel: NET: Registered PF_INET protocol family
Sep 12 23:08:27.994199 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Sep 12 23:08:27.994211 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Sep 12 23:08:27.994223 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Sep 12 23:08:27.994234 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Sep 12 23:08:27.994245 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Sep 12 23:08:27.994256 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Sep 12 23:08:27.994267 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Sep 12 23:08:27.994278 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Sep 12 23:08:27.994293 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Sep 12 23:08:27.994305 kernel: NET: Registered PF_XDP protocol family
Sep 12 23:08:27.994477 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Sep 12 23:08:27.994621 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Sep 12 23:08:27.994812 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Sep 12 23:08:27.994960 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xafffffff window]
Sep 12 23:08:27.995101 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window]
Sep 12 23:08:27.995242 kernel: pci_bus 0000:00: resource 9 [mem 0x100000000-0x8ffffffff window]
Sep 12 23:08:27.995262 kernel: PCI: CLS 0 bytes, default 64
Sep 12 23:08:27.995274 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x2848df6a9de, max_idle_ns: 440795280912 ns
Sep 12 23:08:27.995285 kernel: Initialise system trusted keyrings
Sep 12 23:08:27.995296 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Sep 12 23:08:27.995308 kernel: Key type asymmetric registered
Sep 12 23:08:27.995332 kernel: Asymmetric key parser 'x509' registered
Sep 12 23:08:27.995343 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Sep 12 23:08:27.995354 kernel: io scheduler mq-deadline registered
Sep 12 23:08:27.995364 kernel: io scheduler kyber registered
Sep 12 23:08:27.995379 kernel: io scheduler bfq registered
Sep 12 23:08:27.995391 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Sep 12 23:08:27.995404 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
Sep 12 23:08:27.995416 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23
Sep 12 23:08:27.995427 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20
Sep 12 23:08:27.995439 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Sep 12 23:08:27.995450 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Sep 12 23:08:27.995462 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Sep 12 23:08:27.995473 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Sep 12 23:08:27.995485 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Sep 12 23:08:27.995655 kernel: rtc_cmos 00:04: RTC can wake from S4
Sep 12 23:08:27.995826 kernel: rtc_cmos 00:04: registered as rtc0
Sep 12 23:08:27.995843 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Sep 12 23:08:27.996529 kernel: rtc_cmos 00:04: setting system clock to 2025-09-12T23:08:27 UTC (1757718507)
Sep 12 23:08:27.996909 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram, hpet irqs
Sep 12 23:08:27.996929 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
Sep 12 23:08:27.996941 kernel: NET: Registered PF_INET6 protocol family
Sep 12 23:08:27.996958 kernel: Segment Routing with IPv6
Sep 12 23:08:27.996970 kernel: In-situ OAM (IOAM) with IPv6
Sep 12 23:08:27.996982 kernel: NET: Registered PF_PACKET protocol family
Sep 12 23:08:27.996994 kernel: Key type dns_resolver registered
Sep 12 23:08:27.997005 kernel: IPI shorthand broadcast: enabled
Sep 12 23:08:27.997017 kernel: sched_clock: Marking stable (5412008773, 196520926)->(5776752136, -168222437)
Sep 12 23:08:27.997029 kernel: registered taskstats version 1
Sep 12 23:08:27.997041 kernel: Loading compiled-in X.509 certificates
Sep 12 23:08:27.997053 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.47-flatcar: c3297a5801573420030c321362a802da1fd49c4e'
Sep 12 23:08:27.997067 kernel: Demotion targets for Node 0: null
Sep 12 23:08:27.997079 kernel: Key type .fscrypt registered
Sep 12 23:08:27.997090 kernel: Key type fscrypt-provisioning registered
Sep 12 23:08:27.997102 kernel: ima: No TPM chip found, activating TPM-bypass!
Sep 12 23:08:27.997114 kernel: ima: Allocated hash algorithm: sha1
Sep 12 23:08:27.997125 kernel: ima: No architecture policies found
Sep 12 23:08:27.997228 kernel: clk: Disabling unused clocks
Sep 12 23:08:27.997242 kernel: Warning: unable to open an initial console.
Sep 12 23:08:27.997254 kernel: Freeing unused kernel image (initmem) memory: 54084K
Sep 12 23:08:27.997271 kernel: Write protecting the kernel read-only data: 24576k
Sep 12 23:08:27.997282 kernel: Freeing unused kernel image (rodata/data gap) memory: 248K
Sep 12 23:08:27.997294 kernel: Run /init as init process
Sep 12 23:08:27.997305 kernel: with arguments:
Sep 12 23:08:27.997325 kernel: /init
Sep 12 23:08:27.997336 kernel: with environment:
Sep 12 23:08:27.997347 kernel: HOME=/
Sep 12 23:08:27.997358 kernel: TERM=linux
Sep 12 23:08:27.997369 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Sep 12 23:08:27.997386 systemd[1]: Successfully made /usr/ read-only.
Sep 12 23:08:27.997415 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Sep 12 23:08:27.997430 systemd[1]: Detected virtualization kvm.
Sep 12 23:08:27.997442 systemd[1]: Detected architecture x86-64.
Sep 12 23:08:27.997453 systemd[1]: Running in initrd.
Sep 12 23:08:27.997469 systemd[1]: No hostname configured, using default hostname.
Sep 12 23:08:27.997483 systemd[1]: Hostname set to .
Sep 12 23:08:27.997495 systemd[1]: Initializing machine ID from VM UUID.
Sep 12 23:08:27.997508 systemd[1]: Queued start job for default target initrd.target.
Sep 12 23:08:27.997520 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Sep 12 23:08:27.997531 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Sep 12 23:08:27.997544 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Sep 12 23:08:27.997556 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Sep 12 23:08:27.997572 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Sep 12 23:08:27.997586 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Sep 12 23:08:27.997600 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Sep 12 23:08:27.997613 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Sep 12 23:08:27.997625 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Sep 12 23:08:27.997638 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Sep 12 23:08:27.997653 systemd[1]: Reached target paths.target - Path Units.
Sep 12 23:08:27.997672 systemd[1]: Reached target slices.target - Slice Units.
Sep 12 23:08:27.997688 systemd[1]: Reached target swap.target - Swaps.
Sep 12 23:08:27.997703 systemd[1]: Reached target timers.target - Timer Units.
Sep 12 23:08:27.997719 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Sep 12 23:08:27.997734 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Sep 12 23:08:27.997750 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Sep 12 23:08:27.997787 systemd[1]: Listening on systemd-journald.socket - Journal Sockets.
Sep 12 23:08:27.997803 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Sep 12 23:08:27.997815 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Sep 12 23:08:27.997831 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Sep 12 23:08:27.997843 systemd[1]: Reached target sockets.target - Socket Units. Sep 12 23:08:27.997854 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Sep 12 23:08:27.997867 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Sep 12 23:08:27.997883 systemd[1]: Finished network-cleanup.service - Network Cleanup. Sep 12 23:08:27.997898 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply). Sep 12 23:08:27.997910 systemd[1]: Starting systemd-fsck-usr.service... Sep 12 23:08:27.997923 systemd[1]: Starting systemd-journald.service - Journal Service... Sep 12 23:08:27.997935 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Sep 12 23:08:27.997948 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Sep 12 23:08:27.997961 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Sep 12 23:08:27.997977 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Sep 12 23:08:27.997990 systemd[1]: Finished systemd-fsck-usr.service. Sep 12 23:08:27.998043 systemd-journald[221]: Collecting audit messages is disabled. Sep 12 23:08:27.998076 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Sep 12 23:08:27.998089 systemd-journald[221]: Journal started Sep 12 23:08:27.998116 systemd-journald[221]: Runtime Journal (/run/log/journal/30d13bac459e417195c36b6a42a19ff3) is 6M, max 48.6M, 42.5M free. 
Sep 12 23:08:28.005619 systemd[1]: Started systemd-journald.service - Journal Service. Sep 12 23:08:27.999866 systemd-modules-load[222]: Inserted module 'overlay' Sep 12 23:08:28.009435 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Sep 12 23:08:28.030793 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Sep 12 23:08:28.087150 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Sep 12 23:08:28.087190 kernel: Bridge firewalling registered Sep 12 23:08:28.047024 systemd-tmpfiles[237]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring. Sep 12 23:08:28.085690 systemd-modules-load[222]: Inserted module 'br_netfilter' Sep 12 23:08:28.087889 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Sep 12 23:08:28.096565 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Sep 12 23:08:28.105618 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Sep 12 23:08:28.113656 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Sep 12 23:08:28.116114 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Sep 12 23:08:28.140519 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Sep 12 23:08:28.158930 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Sep 12 23:08:28.176945 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Sep 12 23:08:28.179956 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Sep 12 23:08:28.185781 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... 
Sep 12 23:08:28.191576 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Sep 12 23:08:28.252646 dracut-cmdline[262]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=8e60d6befc710e967d67e9a1d87ced7416895090c99a765b3a00e66a62f49e40 Sep 12 23:08:28.279933 systemd-resolved[263]: Positive Trust Anchors: Sep 12 23:08:28.279956 systemd-resolved[263]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Sep 12 23:08:28.279993 systemd-resolved[263]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Sep 12 23:08:28.283076 systemd-resolved[263]: Defaulting to hostname 'linux'. Sep 12 23:08:28.284722 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Sep 12 23:08:28.286435 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Sep 12 23:08:28.515834 kernel: SCSI subsystem initialized Sep 12 23:08:28.529987 kernel: Loading iSCSI transport class v2.0-870. 
Sep 12 23:08:28.550995 kernel: iscsi: registered transport (tcp) Sep 12 23:08:28.593203 kernel: iscsi: registered transport (qla4xxx) Sep 12 23:08:28.593291 kernel: QLogic iSCSI HBA Driver Sep 12 23:08:28.651446 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Sep 12 23:08:28.688404 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Sep 12 23:08:28.698194 systemd[1]: Reached target network-pre.target - Preparation for Network. Sep 12 23:08:28.902869 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Sep 12 23:08:28.909657 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Sep 12 23:08:29.012852 kernel: raid6: avx2x4 gen() 22028 MB/s Sep 12 23:08:29.029841 kernel: raid6: avx2x2 gen() 22825 MB/s Sep 12 23:08:29.047338 kernel: raid6: avx2x1 gen() 17332 MB/s Sep 12 23:08:29.047427 kernel: raid6: using algorithm avx2x2 gen() 22825 MB/s Sep 12 23:08:29.065201 kernel: raid6: .... xor() 14660 MB/s, rmw enabled Sep 12 23:08:29.065324 kernel: raid6: using avx2x2 recovery algorithm Sep 12 23:08:29.095087 kernel: xor: automatically using best checksumming function avx Sep 12 23:08:29.315820 kernel: Btrfs loaded, zoned=no, fsverity=no Sep 12 23:08:29.326195 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Sep 12 23:08:29.330925 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Sep 12 23:08:29.378875 systemd-udevd[474]: Using default interface naming scheme 'v255'. Sep 12 23:08:29.386082 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Sep 12 23:08:29.391638 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Sep 12 23:08:29.422742 dracut-pre-trigger[483]: rd.md=0: removing MD RAID activation Sep 12 23:08:29.463202 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. 
Sep 12 23:08:29.467184 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Sep 12 23:08:29.575971 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Sep 12 23:08:29.582958 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Sep 12 23:08:29.620791 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues Sep 12 23:08:29.625950 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB) Sep 12 23:08:29.634956 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Sep 12 23:08:29.634992 kernel: GPT:9289727 != 19775487 Sep 12 23:08:29.635032 kernel: GPT:Alternate GPT header not at the end of the disk. Sep 12 23:08:29.635069 kernel: GPT:9289727 != 19775487 Sep 12 23:08:29.635108 kernel: GPT: Use GNU Parted to correct GPT errors. Sep 12 23:08:29.635124 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Sep 12 23:08:29.645800 kernel: cryptd: max_cpu_qlen set to 1000 Sep 12 23:08:29.649887 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input2 Sep 12 23:08:29.655797 kernel: AES CTR mode by8 optimization enabled Sep 12 23:08:29.675803 kernel: libata version 3.00 loaded. Sep 12 23:08:29.677897 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Sep 12 23:08:29.678030 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Sep 12 23:08:29.683265 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Sep 12 23:08:29.690194 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Sep 12 23:08:29.693532 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. 
Sep 12 23:08:29.697308 kernel: ahci 0000:00:1f.2: version 3.0 Sep 12 23:08:29.697596 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16 Sep 12 23:08:29.703825 kernel: ahci 0000:00:1f.2: AHCI vers 0001.0000, 32 command slots, 1.5 Gbps, SATA mode Sep 12 23:08:29.704076 kernel: ahci 0000:00:1f.2: 6/6 ports implemented (port mask 0x3f) Sep 12 23:08:29.704237 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only Sep 12 23:08:29.709124 kernel: scsi host0: ahci Sep 12 23:08:29.709847 kernel: scsi host1: ahci Sep 12 23:08:29.710091 kernel: scsi host2: ahci Sep 12 23:08:29.712670 kernel: scsi host3: ahci Sep 12 23:08:29.721173 kernel: scsi host4: ahci Sep 12 23:08:29.722793 kernel: scsi host5: ahci Sep 12 23:08:29.730932 kernel: ata1: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4100 irq 34 lpm-pol 1 Sep 12 23:08:29.730985 kernel: ata2: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4180 irq 34 lpm-pol 1 Sep 12 23:08:29.730997 kernel: ata3: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4200 irq 34 lpm-pol 1 Sep 12 23:08:29.731015 kernel: ata4: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4280 irq 34 lpm-pol 1 Sep 12 23:08:29.731026 kernel: ata5: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4300 irq 34 lpm-pol 1 Sep 12 23:08:29.732328 kernel: ata6: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4380 irq 34 lpm-pol 1 Sep 12 23:08:29.736860 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. Sep 12 23:08:29.772350 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Sep 12 23:08:29.798010 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. Sep 12 23:08:29.822960 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Sep 12 23:08:29.832886 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132. 
Sep 12 23:08:29.836179 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. Sep 12 23:08:29.839745 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Sep 12 23:08:30.039273 kernel: ata4: SATA link down (SStatus 0 SControl 300) Sep 12 23:08:30.039354 kernel: ata2: SATA link down (SStatus 0 SControl 300) Sep 12 23:08:30.039367 kernel: ata5: SATA link down (SStatus 0 SControl 300) Sep 12 23:08:30.040825 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300) Sep 12 23:08:30.041805 kernel: ata1: SATA link down (SStatus 0 SControl 300) Sep 12 23:08:30.042807 kernel: ata3.00: LPM support broken, forcing max_power Sep 12 23:08:30.042834 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100 Sep 12 23:08:30.043344 kernel: ata3.00: applying bridge limits Sep 12 23:08:30.044814 kernel: ata6: SATA link down (SStatus 0 SControl 300) Sep 12 23:08:30.044842 kernel: ata3.00: LPM support broken, forcing max_power Sep 12 23:08:30.045437 kernel: ata3.00: configured for UDMA/100 Sep 12 23:08:30.046811 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5 Sep 12 23:08:30.164844 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray Sep 12 23:08:30.165240 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Sep 12 23:08:30.178815 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0 Sep 12 23:08:30.539815 disk-uuid[635]: Primary Header is updated. Sep 12 23:08:30.539815 disk-uuid[635]: Secondary Entries is updated. Sep 12 23:08:30.539815 disk-uuid[635]: Secondary Header is updated. Sep 12 23:08:30.544801 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Sep 12 23:08:30.550799 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Sep 12 23:08:30.654149 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Sep 12 23:08:30.690084 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. 
Sep 12 23:08:30.690690 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Sep 12 23:08:30.694331 systemd[1]: Reached target remote-fs.target - Remote File Systems. Sep 12 23:08:30.696294 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Sep 12 23:08:30.726826 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Sep 12 23:08:31.552847 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Sep 12 23:08:31.553473 disk-uuid[640]: The operation has completed successfully. Sep 12 23:08:31.596307 systemd[1]: disk-uuid.service: Deactivated successfully. Sep 12 23:08:31.596464 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Sep 12 23:08:31.637184 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Sep 12 23:08:31.670570 sh[664]: Success Sep 12 23:08:31.697488 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Sep 12 23:08:31.697553 kernel: device-mapper: uevent: version 1.0.3 Sep 12 23:08:31.697568 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev Sep 12 23:08:31.712816 kernel: device-mapper: verity: sha256 using shash "sha256-ni" Sep 12 23:08:31.755803 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Sep 12 23:08:31.761177 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Sep 12 23:08:31.784210 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. 
Sep 12 23:08:31.793812 kernel: BTRFS: device fsid 5d2ab445-1154-4e47-9d7e-ff4b81d84474 devid 1 transid 37 /dev/mapper/usr (253:0) scanned by mount (676) Sep 12 23:08:31.793859 kernel: BTRFS info (device dm-0): first mount of filesystem 5d2ab445-1154-4e47-9d7e-ff4b81d84474 Sep 12 23:08:31.795921 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Sep 12 23:08:31.801805 kernel: BTRFS info (device dm-0): disabling log replay at mount time Sep 12 23:08:31.801881 kernel: BTRFS info (device dm-0): enabling free space tree Sep 12 23:08:31.803317 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Sep 12 23:08:31.805918 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System. Sep 12 23:08:31.808353 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Sep 12 23:08:31.811382 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Sep 12 23:08:31.814287 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Sep 12 23:08:31.864378 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (710) Sep 12 23:08:31.868123 kernel: BTRFS info (device vda6): first mount of filesystem fd5cdc72-255e-4ed2-8d25-c5e581a08827 Sep 12 23:08:31.868201 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Sep 12 23:08:31.878797 kernel: BTRFS info (device vda6): turning on async discard Sep 12 23:08:31.878872 kernel: BTRFS info (device vda6): enabling free space tree Sep 12 23:08:31.886133 kernel: BTRFS info (device vda6): last unmount of filesystem fd5cdc72-255e-4ed2-8d25-c5e581a08827 Sep 12 23:08:31.891804 systemd[1]: Finished ignition-setup.service - Ignition (setup). Sep 12 23:08:31.896737 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... 
Sep 12 23:08:32.022177 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Sep 12 23:08:32.027202 ignition[760]: Ignition 2.22.0 Sep 12 23:08:32.027236 ignition[760]: Stage: fetch-offline Sep 12 23:08:32.029119 systemd[1]: Starting systemd-networkd.service - Network Configuration... Sep 12 23:08:32.027278 ignition[760]: no configs at "/usr/lib/ignition/base.d" Sep 12 23:08:32.027294 ignition[760]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Sep 12 23:08:32.027435 ignition[760]: parsed url from cmdline: "" Sep 12 23:08:32.027439 ignition[760]: no config URL provided Sep 12 23:08:32.027445 ignition[760]: reading system config file "/usr/lib/ignition/user.ign" Sep 12 23:08:32.027456 ignition[760]: no config at "/usr/lib/ignition/user.ign" Sep 12 23:08:32.027490 ignition[760]: op(1): [started] loading QEMU firmware config module Sep 12 23:08:32.027497 ignition[760]: op(1): executing: "modprobe" "qemu_fw_cfg" Sep 12 23:08:32.040574 ignition[760]: op(1): [finished] loading QEMU firmware config module Sep 12 23:08:32.083498 ignition[760]: parsing config with SHA512: a91a89eea88845ad1b86bf4df5a4e6d7fc395bd11f2cbd00c00478876fdc227601c9070d979f232f79db46f61fd325c7c00e810c7d343df160a269229673005b Sep 12 23:08:32.087181 unknown[760]: fetched base config from "system" Sep 12 23:08:32.087196 unknown[760]: fetched user config from "qemu" Sep 12 23:08:32.087631 ignition[760]: fetch-offline: fetch-offline passed Sep 12 23:08:32.091371 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Sep 12 23:08:32.087698 ignition[760]: Ignition finished successfully Sep 12 23:08:32.115600 systemd-networkd[853]: lo: Link UP Sep 12 23:08:32.115612 systemd-networkd[853]: lo: Gained carrier Sep 12 23:08:32.117473 systemd-networkd[853]: Enumeration completed Sep 12 23:08:32.117732 systemd[1]: Started systemd-networkd.service - Network Configuration. 
Sep 12 23:08:32.117940 systemd-networkd[853]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 12 23:08:32.117946 systemd-networkd[853]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Sep 12 23:08:32.120286 systemd-networkd[853]: eth0: Link UP Sep 12 23:08:32.120493 systemd-networkd[853]: eth0: Gained carrier Sep 12 23:08:32.120504 systemd-networkd[853]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 12 23:08:32.120645 systemd[1]: Reached target network.target - Network. Sep 12 23:08:32.122451 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Sep 12 23:08:32.123618 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Sep 12 23:08:32.160914 systemd-networkd[853]: eth0: DHCPv4 address 10.0.0.144/16, gateway 10.0.0.1 acquired from 10.0.0.1 Sep 12 23:08:32.184405 ignition[857]: Ignition 2.22.0 Sep 12 23:08:32.184428 ignition[857]: Stage: kargs Sep 12 23:08:32.184607 ignition[857]: no configs at "/usr/lib/ignition/base.d" Sep 12 23:08:32.184621 ignition[857]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Sep 12 23:08:32.185662 ignition[857]: kargs: kargs passed Sep 12 23:08:32.185727 ignition[857]: Ignition finished successfully Sep 12 23:08:32.194535 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Sep 12 23:08:32.199795 systemd[1]: Starting ignition-disks.service - Ignition (disks)... 
Sep 12 23:08:32.246554 ignition[867]: Ignition 2.22.0 Sep 12 23:08:32.246577 ignition[867]: Stage: disks Sep 12 23:08:32.246735 ignition[867]: no configs at "/usr/lib/ignition/base.d" Sep 12 23:08:32.246747 ignition[867]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Sep 12 23:08:32.251148 ignition[867]: disks: disks passed Sep 12 23:08:32.251954 ignition[867]: Ignition finished successfully Sep 12 23:08:32.256939 systemd[1]: Finished ignition-disks.service - Ignition (disks). Sep 12 23:08:32.259270 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Sep 12 23:08:32.259637 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Sep 12 23:08:32.262271 systemd[1]: Reached target local-fs.target - Local File Systems. Sep 12 23:08:32.262645 systemd[1]: Reached target sysinit.target - System Initialization. Sep 12 23:08:32.263280 systemd[1]: Reached target basic.target - Basic System. Sep 12 23:08:32.270663 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Sep 12 23:08:32.299991 systemd-resolved[263]: Detected conflict on linux IN A 10.0.0.144 Sep 12 23:08:32.300023 systemd-resolved[263]: Hostname conflict, changing published hostname from 'linux' to 'linux9'. Sep 12 23:08:32.303473 systemd-fsck[877]: ROOT: clean, 15/553520 files, 52789/553472 blocks Sep 12 23:08:32.315449 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Sep 12 23:08:32.317927 systemd[1]: Mounting sysroot.mount - /sysroot... Sep 12 23:08:32.437819 kernel: EXT4-fs (vda9): mounted filesystem d027afc5-396a-49bf-a5be-60ddd42cb089 r/w with ordered data mode. Quota mode: none. Sep 12 23:08:32.439045 systemd[1]: Mounted sysroot.mount - /sysroot. Sep 12 23:08:32.440475 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Sep 12 23:08:32.442882 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... 
Sep 12 23:08:32.445036 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Sep 12 23:08:32.447173 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Sep 12 23:08:32.447258 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Sep 12 23:08:32.447295 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Sep 12 23:08:32.462680 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Sep 12 23:08:32.464666 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Sep 12 23:08:32.469806 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (886) Sep 12 23:08:32.472293 kernel: BTRFS info (device vda6): first mount of filesystem fd5cdc72-255e-4ed2-8d25-c5e581a08827 Sep 12 23:08:32.472413 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Sep 12 23:08:32.475955 kernel: BTRFS info (device vda6): turning on async discard Sep 12 23:08:32.475988 kernel: BTRFS info (device vda6): enabling free space tree Sep 12 23:08:32.478730 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Sep 12 23:08:32.513777 initrd-setup-root[910]: cut: /sysroot/etc/passwd: No such file or directory Sep 12 23:08:32.520114 initrd-setup-root[917]: cut: /sysroot/etc/group: No such file or directory Sep 12 23:08:32.526735 initrd-setup-root[924]: cut: /sysroot/etc/shadow: No such file or directory Sep 12 23:08:32.533062 initrd-setup-root[931]: cut: /sysroot/etc/gshadow: No such file or directory Sep 12 23:08:32.684723 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Sep 12 23:08:32.687135 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Sep 12 23:08:32.695341 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... 
Sep 12 23:08:32.714333 kernel: BTRFS info (device vda6): last unmount of filesystem fd5cdc72-255e-4ed2-8d25-c5e581a08827 Sep 12 23:08:32.759300 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Sep 12 23:08:32.795029 ignition[999]: INFO : Ignition 2.22.0 Sep 12 23:08:32.795029 ignition[999]: INFO : Stage: mount Sep 12 23:08:32.800130 ignition[999]: INFO : no configs at "/usr/lib/ignition/base.d" Sep 12 23:08:32.800130 ignition[999]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Sep 12 23:08:32.800130 ignition[999]: INFO : mount: mount passed Sep 12 23:08:32.800130 ignition[999]: INFO : Ignition finished successfully Sep 12 23:08:32.795543 systemd[1]: sysroot-oem.mount: Deactivated successfully. Sep 12 23:08:32.808716 systemd[1]: Finished ignition-mount.service - Ignition (mount). Sep 12 23:08:32.818887 systemd[1]: Starting ignition-files.service - Ignition (files)... Sep 12 23:08:32.859364 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Sep 12 23:08:32.889269 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (1012) Sep 12 23:08:32.889364 kernel: BTRFS info (device vda6): first mount of filesystem fd5cdc72-255e-4ed2-8d25-c5e581a08827 Sep 12 23:08:32.890876 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Sep 12 23:08:32.898267 kernel: BTRFS info (device vda6): turning on async discard Sep 12 23:08:32.898358 kernel: BTRFS info (device vda6): enabling free space tree Sep 12 23:08:32.907588 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Sep 12 23:08:32.968124 ignition[1029]: INFO : Ignition 2.22.0 Sep 12 23:08:32.968124 ignition[1029]: INFO : Stage: files Sep 12 23:08:32.970328 ignition[1029]: INFO : no configs at "/usr/lib/ignition/base.d" Sep 12 23:08:32.970328 ignition[1029]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Sep 12 23:08:32.970328 ignition[1029]: DEBUG : files: compiled without relabeling support, skipping Sep 12 23:08:32.979360 ignition[1029]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Sep 12 23:08:32.979360 ignition[1029]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Sep 12 23:08:32.987839 ignition[1029]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Sep 12 23:08:32.987839 ignition[1029]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Sep 12 23:08:32.992878 ignition[1029]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Sep 12 23:08:32.987994 unknown[1029]: wrote ssh authorized keys file for user: core Sep 12 23:08:32.997222 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz" Sep 12 23:08:32.997222 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.0-linux-amd64.tar.gz: attempt #1 Sep 12 23:08:33.160540 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Sep 12 23:08:33.467372 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz" Sep 12 23:08:33.467372 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz" Sep 12 23:08:33.476447 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET 
https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1 Sep 12 23:08:33.576087 systemd-networkd[853]: eth0: Gained IPv6LL Sep 12 23:08:33.720953 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Sep 12 23:08:33.923788 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz" Sep 12 23:08:33.923788 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh" Sep 12 23:08:33.943362 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh" Sep 12 23:08:33.943362 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml" Sep 12 23:08:33.943362 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml" Sep 12 23:08:33.943362 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Sep 12 23:08:33.943362 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Sep 12 23:08:33.943362 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Sep 12 23:08:33.943362 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Sep 12 23:08:33.970979 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf" Sep 12 23:08:33.970979 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf" Sep 12 23:08:33.970979 
ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Sep 12 23:08:33.970979 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Sep 12 23:08:33.970979 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Sep 12 23:08:33.970979 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://extensions.flatcar.org/extensions/kubernetes-v1.32.4-x86-64.raw: attempt #1 Sep 12 23:08:34.369473 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK Sep 12 23:08:35.879780 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Sep 12 23:08:35.879780 ignition[1029]: INFO : files: op(c): [started] processing unit "prepare-helm.service" Sep 12 23:08:35.883905 ignition[1029]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Sep 12 23:08:36.078448 ignition[1029]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Sep 12 23:08:36.078448 ignition[1029]: INFO : files: op(c): [finished] processing unit "prepare-helm.service" Sep 12 23:08:36.078448 ignition[1029]: INFO : files: op(e): [started] processing unit "coreos-metadata.service" Sep 12 23:08:36.078448 ignition[1029]: INFO : files: op(e): op(f): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Sep 12 23:08:36.087008 ignition[1029]: INFO : files: op(e): 
op(f): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Sep 12 23:08:36.087008 ignition[1029]: INFO : files: op(e): [finished] processing unit "coreos-metadata.service" Sep 12 23:08:36.087008 ignition[1029]: INFO : files: op(10): [started] setting preset to disabled for "coreos-metadata.service" Sep 12 23:08:36.122506 ignition[1029]: INFO : files: op(10): op(11): [started] removing enablement symlink(s) for "coreos-metadata.service" Sep 12 23:08:36.130697 ignition[1029]: INFO : files: op(10): op(11): [finished] removing enablement symlink(s) for "coreos-metadata.service" Sep 12 23:08:36.130697 ignition[1029]: INFO : files: op(10): [finished] setting preset to disabled for "coreos-metadata.service" Sep 12 23:08:36.130697 ignition[1029]: INFO : files: op(12): [started] setting preset to enabled for "prepare-helm.service" Sep 12 23:08:36.130697 ignition[1029]: INFO : files: op(12): [finished] setting preset to enabled for "prepare-helm.service" Sep 12 23:08:36.138230 ignition[1029]: INFO : files: createResultFile: createFiles: op(13): [started] writing file "/sysroot/etc/.ignition-result.json" Sep 12 23:08:36.138230 ignition[1029]: INFO : files: createResultFile: createFiles: op(13): [finished] writing file "/sysroot/etc/.ignition-result.json" Sep 12 23:08:36.138230 ignition[1029]: INFO : files: files passed Sep 12 23:08:36.138230 ignition[1029]: INFO : Ignition finished successfully Sep 12 23:08:36.138085 systemd[1]: Finished ignition-files.service - Ignition (files). Sep 12 23:08:36.140517 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Sep 12 23:08:36.145872 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Sep 12 23:08:36.158594 systemd[1]: ignition-quench.service: Deactivated successfully. Sep 12 23:08:36.158751 systemd[1]: Finished ignition-quench.service - Ignition (record completion). 
Sep 12 23:08:36.163441 initrd-setup-root-after-ignition[1058]: grep: /sysroot/oem/oem-release: No such file or directory
Sep 12 23:08:36.168478 initrd-setup-root-after-ignition[1060]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Sep 12 23:08:36.168478 initrd-setup-root-after-ignition[1060]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Sep 12 23:08:36.172186 initrd-setup-root-after-ignition[1064]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Sep 12 23:08:36.175619 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Sep 12 23:08:36.178708 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Sep 12 23:08:36.182269 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Sep 12 23:08:36.249459 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Sep 12 23:08:36.250957 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Sep 12 23:08:36.254043 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Sep 12 23:08:36.256295 systemd[1]: Reached target initrd.target - Initrd Default Target.
Sep 12 23:08:36.258672 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Sep 12 23:08:36.261564 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Sep 12 23:08:36.303972 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Sep 12 23:08:36.309300 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Sep 12 23:08:36.340206 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Sep 12 23:08:36.341650 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Sep 12 23:08:36.342198 systemd[1]: Stopped target timers.target - Timer Units.
Sep 12 23:08:36.342617 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Sep 12 23:08:36.342794 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Sep 12 23:08:36.350310 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Sep 12 23:08:36.352698 systemd[1]: Stopped target basic.target - Basic System.
Sep 12 23:08:36.354944 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Sep 12 23:08:36.358210 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Sep 12 23:08:36.358556 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Sep 12 23:08:36.359443 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System.
Sep 12 23:08:36.359829 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Sep 12 23:08:36.360476 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Sep 12 23:08:36.361032 systemd[1]: Stopped target sysinit.target - System Initialization.
Sep 12 23:08:36.363350 systemd[1]: Stopped target local-fs.target - Local File Systems.
Sep 12 23:08:36.374893 systemd[1]: Stopped target swap.target - Swaps.
Sep 12 23:08:36.379613 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Sep 12 23:08:36.379867 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Sep 12 23:08:36.380967 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Sep 12 23:08:36.383703 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Sep 12 23:08:36.384477 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Sep 12 23:08:36.384652 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Sep 12 23:08:36.385306 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Sep 12 23:08:36.385464 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Sep 12 23:08:36.390509 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Sep 12 23:08:36.390673 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Sep 12 23:08:36.393806 systemd[1]: Stopped target paths.target - Path Units.
Sep 12 23:08:36.396592 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Sep 12 23:08:36.399729 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Sep 12 23:08:36.404595 systemd[1]: Stopped target slices.target - Slice Units.
Sep 12 23:08:36.404953 systemd[1]: Stopped target sockets.target - Socket Units.
Sep 12 23:08:36.409082 systemd[1]: iscsid.socket: Deactivated successfully.
Sep 12 23:08:36.409250 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Sep 12 23:08:36.411284 systemd[1]: iscsiuio.socket: Deactivated successfully.
Sep 12 23:08:36.411407 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Sep 12 23:08:36.412007 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Sep 12 23:08:36.412173 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Sep 12 23:08:36.414546 systemd[1]: ignition-files.service: Deactivated successfully.
Sep 12 23:08:36.414660 systemd[1]: Stopped ignition-files.service - Ignition (files).
Sep 12 23:08:36.420100 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Sep 12 23:08:36.420587 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Sep 12 23:08:36.420803 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Sep 12 23:08:36.423649 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Sep 12 23:08:36.427017 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Sep 12 23:08:36.427243 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Sep 12 23:08:36.428856 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Sep 12 23:08:36.428982 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Sep 12 23:08:36.439067 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Sep 12 23:08:36.439221 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Sep 12 23:08:36.464379 ignition[1084]: INFO : Ignition 2.22.0
Sep 12 23:08:36.464379 ignition[1084]: INFO : Stage: umount
Sep 12 23:08:36.466974 ignition[1084]: INFO : no configs at "/usr/lib/ignition/base.d"
Sep 12 23:08:36.466974 ignition[1084]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Sep 12 23:08:36.466974 ignition[1084]: INFO : umount: umount passed
Sep 12 23:08:36.466974 ignition[1084]: INFO : Ignition finished successfully
Sep 12 23:08:36.464508 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Sep 12 23:08:36.471585 systemd[1]: ignition-mount.service: Deactivated successfully.
Sep 12 23:08:36.471846 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Sep 12 23:08:36.472680 systemd[1]: sysroot-boot.service: Deactivated successfully.
Sep 12 23:08:36.472897 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Sep 12 23:08:36.475880 systemd[1]: Stopped target network.target - Network.
Sep 12 23:08:36.479143 systemd[1]: ignition-disks.service: Deactivated successfully.
Sep 12 23:08:36.479219 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Sep 12 23:08:36.481373 systemd[1]: ignition-kargs.service: Deactivated successfully.
Sep 12 23:08:36.481440 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Sep 12 23:08:36.483442 systemd[1]: ignition-setup.service: Deactivated successfully.
Sep 12 23:08:36.484145 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Sep 12 23:08:36.485112 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Sep 12 23:08:36.485171 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Sep 12 23:08:36.486248 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Sep 12 23:08:36.486301 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Sep 12 23:08:36.486802 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Sep 12 23:08:36.491262 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Sep 12 23:08:36.503999 systemd[1]: systemd-resolved.service: Deactivated successfully.
Sep 12 23:08:36.504275 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Sep 12 23:08:36.509263 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully.
Sep 12 23:08:36.509648 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Sep 12 23:08:36.509718 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Sep 12 23:08:36.517800 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully.
Sep 12 23:08:36.519318 systemd[1]: systemd-networkd.service: Deactivated successfully.
Sep 12 23:08:36.519614 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Sep 12 23:08:36.524196 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully.
Sep 12 23:08:36.524397 systemd[1]: Stopped target network-pre.target - Preparation for Network.
Sep 12 23:08:36.528469 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Sep 12 23:08:36.528525 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Sep 12 23:08:36.532865 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Sep 12 23:08:36.533339 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Sep 12 23:08:36.533403 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Sep 12 23:08:36.534118 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Sep 12 23:08:36.534176 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Sep 12 23:08:36.539405 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Sep 12 23:08:36.539466 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Sep 12 23:08:36.540115 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Sep 12 23:08:36.542128 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Sep 12 23:08:36.559599 systemd[1]: network-cleanup.service: Deactivated successfully.
Sep 12 23:08:36.561261 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Sep 12 23:08:36.562760 systemd[1]: systemd-udevd.service: Deactivated successfully.
Sep 12 23:08:36.563055 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Sep 12 23:08:36.565422 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Sep 12 23:08:36.565521 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Sep 12 23:08:36.566353 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Sep 12 23:08:36.566399 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Sep 12 23:08:36.568690 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Sep 12 23:08:36.568753 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Sep 12 23:08:36.572824 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Sep 12 23:08:36.572880 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Sep 12 23:08:36.575061 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Sep 12 23:08:36.575144 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Sep 12 23:08:36.581475 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Sep 12 23:08:36.582887 systemd[1]: systemd-network-generator.service: Deactivated successfully.
Sep 12 23:08:36.582948 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line.
Sep 12 23:08:36.588168 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Sep 12 23:08:36.588225 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Sep 12 23:08:36.593731 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Sep 12 23:08:36.593819 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Sep 12 23:08:36.610568 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Sep 12 23:08:36.610739 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Sep 12 23:08:36.611611 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Sep 12 23:08:36.619476 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Sep 12 23:08:36.659820 systemd[1]: Switching root.
Sep 12 23:08:36.707483 systemd-journald[221]: Journal stopped
Sep 12 23:08:38.368929 systemd-journald[221]: Received SIGTERM from PID 1 (systemd).
Sep 12 23:08:38.369001 kernel: SELinux: policy capability network_peer_controls=1
Sep 12 23:08:38.369018 kernel: SELinux: policy capability open_perms=1
Sep 12 23:08:38.370060 kernel: SELinux: policy capability extended_socket_class=1
Sep 12 23:08:38.370113 kernel: SELinux: policy capability always_check_network=0
Sep 12 23:08:38.370132 kernel: SELinux: policy capability cgroup_seclabel=1
Sep 12 23:08:38.370151 kernel: SELinux: policy capability nnp_nosuid_transition=1
Sep 12 23:08:38.370169 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Sep 12 23:08:38.370187 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Sep 12 23:08:38.370205 kernel: SELinux: policy capability userspace_initial_context=0
Sep 12 23:08:38.370223 kernel: audit: type=1403 audit(1757718517.221:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Sep 12 23:08:38.370254 systemd[1]: Successfully loaded SELinux policy in 76.855ms.
Sep 12 23:08:38.370308 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 12.629ms.
Sep 12 23:08:38.370333 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Sep 12 23:08:38.370362 systemd[1]: Detected virtualization kvm.
Sep 12 23:08:38.370381 systemd[1]: Detected architecture x86-64.
Sep 12 23:08:38.370398 systemd[1]: Detected first boot.
Sep 12 23:08:38.370418 systemd[1]: Initializing machine ID from VM UUID.
Sep 12 23:08:38.370463 zram_generator::config[1129]: No configuration found.
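The long `systemd 256.8 running in system mode (+PAM +AUDIT …)` line above encodes systemd's compile-time features as `+`/`-` prefixed tokens. As a minimal sketch (a hypothetical helper, not part of systemd or any log tooling), that string can be split into enabled and disabled feature sets:

```python
def parse_systemd_features(flags: str) -> tuple[set[str], set[str]]:
    """Split a systemd feature string like '+PAM -APPARMOR' into
    (enabled, disabled) sets based on the +/- prefix of each token."""
    enabled, disabled = set(), set()
    for token in flags.split():
        if token.startswith("+"):
            enabled.add(token[1:])
        elif token.startswith("-"):
            disabled.add(token[1:])
    return enabled, disabled

# A fragment of the feature string from the log line above.
enabled, disabled = parse_systemd_features(
    "+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT +TPM2"
)
```

This mirrors how the log confirms SELinux support (the policy-load messages above) while AppArmor is compiled out.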
Sep 12 23:08:38.370484 kernel: Guest personality initialized and is inactive
Sep 12 23:08:38.370512 kernel: VMCI host device registered (name=vmci, major=10, minor=125)
Sep 12 23:08:38.370531 kernel: Initialized host personality
Sep 12 23:08:38.370549 kernel: NET: Registered PF_VSOCK protocol family
Sep 12 23:08:38.370568 systemd[1]: Populated /etc with preset unit settings.
Sep 12 23:08:38.370591 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully.
Sep 12 23:08:38.370633 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Sep 12 23:08:38.370653 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Sep 12 23:08:38.370673 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Sep 12 23:08:38.370692 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Sep 12 23:08:38.370721 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Sep 12 23:08:38.370741 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Sep 12 23:08:38.370781 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Sep 12 23:08:38.370804 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Sep 12 23:08:38.370829 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Sep 12 23:08:38.370850 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Sep 12 23:08:38.370884 systemd[1]: Created slice user.slice - User and Session Slice.
Sep 12 23:08:38.370923 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Sep 12 23:08:38.370968 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Sep 12 23:08:38.370999 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Sep 12 23:08:38.371050 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Sep 12 23:08:38.371101 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Sep 12 23:08:38.371141 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Sep 12 23:08:38.371186 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Sep 12 23:08:38.371235 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Sep 12 23:08:38.371279 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Sep 12 23:08:38.371344 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Sep 12 23:08:38.371397 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Sep 12 23:08:38.371442 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Sep 12 23:08:38.371468 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Sep 12 23:08:38.371488 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Sep 12 23:08:38.371508 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Sep 12 23:08:38.371526 systemd[1]: Reached target slices.target - Slice Units.
Sep 12 23:08:38.371545 systemd[1]: Reached target swap.target - Swaps.
Sep 12 23:08:38.371564 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Sep 12 23:08:38.371595 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Sep 12 23:08:38.371615 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption.
Sep 12 23:08:38.371633 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Sep 12 23:08:38.371652 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Sep 12 23:08:38.371671 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Sep 12 23:08:38.371690 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Sep 12 23:08:38.371709 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Sep 12 23:08:38.371738 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Sep 12 23:08:38.371759 systemd[1]: Mounting media.mount - External Media Directory...
Sep 12 23:08:38.371810 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 12 23:08:38.371831 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Sep 12 23:08:38.371850 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Sep 12 23:08:38.371869 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Sep 12 23:08:38.371892 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Sep 12 23:08:38.371911 systemd[1]: Reached target machines.target - Containers.
Sep 12 23:08:38.371929 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Sep 12 23:08:38.371948 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Sep 12 23:08:38.371977 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Sep 12 23:08:38.371997 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Sep 12 23:08:38.372016 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Sep 12 23:08:38.372049 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Sep 12 23:08:38.372070 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Sep 12 23:08:38.372089 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Sep 12 23:08:38.372108 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Sep 12 23:08:38.372128 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Sep 12 23:08:38.372147 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Sep 12 23:08:38.372175 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Sep 12 23:08:38.372195 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Sep 12 23:08:38.372215 systemd[1]: Stopped systemd-fsck-usr.service.
Sep 12 23:08:38.372234 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Sep 12 23:08:38.372253 systemd[1]: Starting systemd-journald.service - Journal Service...
Sep 12 23:08:38.372271 kernel: fuse: init (API version 7.41)
Sep 12 23:08:38.372290 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Sep 12 23:08:38.372309 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Sep 12 23:08:38.372332 kernel: loop: module loaded
Sep 12 23:08:38.372364 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Sep 12 23:08:38.372384 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials...
Sep 12 23:08:38.372403 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Sep 12 23:08:38.372421 systemd[1]: verity-setup.service: Deactivated successfully.
Sep 12 23:08:38.372440 systemd[1]: Stopped verity-setup.service.
Sep 12 23:08:38.372470 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 12 23:08:38.372490 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Sep 12 23:08:38.372509 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Sep 12 23:08:38.372527 systemd[1]: Mounted media.mount - External Media Directory.
Sep 12 23:08:38.372547 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Sep 12 23:08:38.372565 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Sep 12 23:08:38.372665 systemd-journald[1200]: Collecting audit messages is disabled.
Sep 12 23:08:38.372713 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Sep 12 23:08:38.372735 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Sep 12 23:08:38.372754 kernel: ACPI: bus type drm_connector registered
Sep 12 23:08:38.372847 systemd-journald[1200]: Journal started
Sep 12 23:08:38.372896 systemd-journald[1200]: Runtime Journal (/run/log/journal/30d13bac459e417195c36b6a42a19ff3) is 6M, max 48.6M, 42.5M free.
Sep 12 23:08:38.047567 systemd[1]: Queued start job for default target multi-user.target.
Sep 12 23:08:38.063010 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
Sep 12 23:08:38.373812 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Sep 12 23:08:38.063707 systemd[1]: systemd-journald.service: Deactivated successfully.
Sep 12 23:08:38.378790 systemd[1]: Started systemd-journald.service - Journal Service.
Sep 12 23:08:38.380236 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Sep 12 23:08:38.380582 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Sep 12 23:08:38.382325 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Sep 12 23:08:38.382575 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Sep 12 23:08:38.384251 systemd[1]: modprobe@drm.service: Deactivated successfully.
Sep 12 23:08:38.384489 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Sep 12 23:08:38.386061 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Sep 12 23:08:38.386391 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Sep 12 23:08:38.388205 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Sep 12 23:08:38.388453 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Sep 12 23:08:38.390016 systemd[1]: modprobe@loop.service: Deactivated successfully.
Sep 12 23:08:38.390270 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Sep 12 23:08:38.391985 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Sep 12 23:08:38.393664 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Sep 12 23:08:38.395536 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Sep 12 23:08:38.413616 systemd[1]: Reached target network-pre.target - Preparation for Network.
Sep 12 23:08:38.417103 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Sep 12 23:08:38.423865 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Sep 12 23:08:38.425261 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Sep 12 23:08:38.425298 systemd[1]: Reached target local-fs.target - Local File Systems.
Sep 12 23:08:38.428693 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management.
Sep 12 23:08:38.520260 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Sep 12 23:08:38.521665 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Sep 12 23:08:38.524285 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Sep 12 23:08:38.527500 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Sep 12 23:08:38.529404 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Sep 12 23:08:38.531626 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Sep 12 23:08:38.532562 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Sep 12 23:08:38.535926 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Sep 12 23:08:38.540025 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Sep 12 23:08:38.544083 systemd-journald[1200]: Time spent on flushing to /var/log/journal/30d13bac459e417195c36b6a42a19ff3 is 21.687ms for 984 entries.
Sep 12 23:08:38.544083 systemd-journald[1200]: System Journal (/var/log/journal/30d13bac459e417195c36b6a42a19ff3) is 8M, max 195.6M, 187.6M free.
Sep 12 23:08:38.577354 systemd-journald[1200]: Received client request to flush runtime journal.
Sep 12 23:08:38.577396 kernel: loop0: detected capacity change from 0 to 110984
Sep 12 23:08:38.544976 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Sep 12 23:08:38.551125 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials.
Sep 12 23:08:38.554545 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Sep 12 23:08:38.557269 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Sep 12 23:08:38.559002 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Sep 12 23:08:38.563834 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Sep 12 23:08:38.572761 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Sep 12 23:08:38.580185 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk...
Sep 12 23:08:38.585539 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Sep 12 23:08:38.595040 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Sep 12 23:08:38.619016 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Sep 12 23:08:38.629207 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk.
Sep 12 23:08:38.644317 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Sep 12 23:08:38.648829 kernel: loop1: detected capacity change from 0 to 128016
Sep 12 23:08:38.649302 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Sep 12 23:08:38.687676 systemd-tmpfiles[1265]: ACLs are not supported, ignoring.
Sep 12 23:08:38.687701 systemd-tmpfiles[1265]: ACLs are not supported, ignoring.
Sep 12 23:08:38.692855 kernel: loop2: detected capacity change from 0 to 224512
Sep 12 23:08:38.693374 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Sep 12 23:08:38.729813 kernel: loop3: detected capacity change from 0 to 110984
Sep 12 23:08:38.759828 kernel: loop4: detected capacity change from 0 to 128016
Sep 12 23:08:38.786801 kernel: loop5: detected capacity change from 0 to 224512
Sep 12 23:08:38.816586 (sd-merge)[1270]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'.
Sep 12 23:08:38.817410 (sd-merge)[1270]: Merged extensions into '/usr'.
Sep 12 23:08:38.824384 systemd[1]: Reload requested from client PID 1247 ('systemd-sysext') (unit systemd-sysext.service)...
Sep 12 23:08:38.824408 systemd[1]: Reloading...
Sep 12 23:08:38.955937 zram_generator::config[1292]: No configuration found.
Sep 12 23:08:39.268254 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Sep 12 23:08:39.268604 systemd[1]: Reloading finished in 443 ms.
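Durations like the `Reloading finished in 443 ms.` above are reported by systemd itself, but they can also be cross-checked from the wall-clock timestamps on the entries. A minimal sketch (a hypothetical helper, not part of journald; the year is assumed since these timestamps omit it) that parses the `Sep 12 23:08:38.824408` prefix format used throughout this log:

```python
from datetime import datetime

def parse_ts(line: str, year: int = 2025) -> datetime:
    """Parse the 'Mon DD HH:MM:SS.ffffff' prefix of a log line shown above."""
    stamp = " ".join(line.split()[:3])  # e.g. "Sep 12 23:08:38.824408"
    return datetime.strptime(f"{year} {stamp}", "%Y %b %d %H:%M:%S.%f")

# Two entries copied from the log above: the sysext-triggered reload.
start = parse_ts("Sep 12 23:08:38.824408 systemd[1]: Reloading...")
end = parse_ts("Sep 12 23:08:39.268604 systemd[1]: Reloading finished in 443 ms.")
elapsed_ms = (end - start).total_seconds() * 1000
```

The timestamp delta (~444 ms) agrees with the 443 ms that systemd reports internally, up to the point at which each message was emitted.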
Sep 12 23:08:39.329260 ldconfig[1242]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Sep 12 23:08:39.331511 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Sep 12 23:08:39.351135 systemd[1]: Starting ensure-sysext.service...
Sep 12 23:08:39.353534 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Sep 12 23:08:39.372593 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Sep 12 23:08:39.379858 systemd[1]: Reload requested from client PID 1332 ('systemctl') (unit ensure-sysext.service)...
Sep 12 23:08:39.379873 systemd[1]: Reloading...
Sep 12 23:08:39.386743 systemd-tmpfiles[1333]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring.
Sep 12 23:08:39.387722 systemd-tmpfiles[1333]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring.
Sep 12 23:08:39.388309 systemd-tmpfiles[1333]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Sep 12 23:08:39.388942 systemd-tmpfiles[1333]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Sep 12 23:08:39.390355 systemd-tmpfiles[1333]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Sep 12 23:08:39.390887 systemd-tmpfiles[1333]: ACLs are not supported, ignoring.
Sep 12 23:08:39.391072 systemd-tmpfiles[1333]: ACLs are not supported, ignoring.
Sep 12 23:08:39.397781 systemd-tmpfiles[1333]: Detected autofs mount point /boot during canonicalization of boot.
Sep 12 23:08:39.397974 systemd-tmpfiles[1333]: Skipping /boot
Sep 12 23:08:39.411659 systemd-tmpfiles[1333]: Detected autofs mount point /boot during canonicalization of boot.
Sep 12 23:08:39.411760 systemd-tmpfiles[1333]: Skipping /boot
Sep 12 23:08:39.447803 zram_generator::config[1359]: No configuration found.
Sep 12 23:08:39.720554 systemd[1]: Reloading finished in 340 ms. Sep 12 23:08:39.761578 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Sep 12 23:08:39.764413 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Sep 12 23:08:39.779153 systemd[1]: Starting audit-rules.service - Load Audit Rules... Sep 12 23:08:39.783457 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Sep 12 23:08:39.796056 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Sep 12 23:08:39.801632 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Sep 12 23:08:39.808136 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Sep 12 23:08:39.812268 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Sep 12 23:08:39.816834 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 12 23:08:39.817264 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Sep 12 23:08:39.824460 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Sep 12 23:08:39.830856 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Sep 12 23:08:39.834864 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Sep 12 23:08:39.836493 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Sep 12 23:08:39.836629 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). 
Sep 12 23:08:39.836753 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 12 23:08:39.839332 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 12 23:08:39.839785 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Sep 12 23:08:39.843459 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 12 23:08:39.844308 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Sep 12 23:08:39.846980 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 12 23:08:39.847441 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Sep 12 23:08:39.853752 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Sep 12 23:08:39.868231 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Sep 12 23:08:39.872259 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 12 23:08:39.872525 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Sep 12 23:08:39.874587 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Sep 12 23:08:39.878956 systemd-udevd[1406]: Using default interface naming scheme 'v255'. Sep 12 23:08:39.893549 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Sep 12 23:08:39.902509 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Sep 12 23:08:39.905588 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. 
Sep 12 23:08:39.905857 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Sep 12 23:08:39.906166 augenrules[1435]: No rules Sep 12 23:08:39.916639 systemd[1]: Starting systemd-update-done.service - Update is Completed... Sep 12 23:08:39.921044 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Sep 12 23:08:39.923325 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 12 23:08:39.925187 systemd[1]: audit-rules.service: Deactivated successfully. Sep 12 23:08:39.925654 systemd[1]: Finished audit-rules.service - Load Audit Rules. Sep 12 23:08:39.928494 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 12 23:08:39.928853 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Sep 12 23:08:39.930641 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 12 23:08:39.931168 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Sep 12 23:08:39.933422 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 12 23:08:39.933726 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Sep 12 23:08:39.935626 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Sep 12 23:08:39.943573 systemd[1]: Finished systemd-update-done.service - Update is Completed. Sep 12 23:08:39.964152 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 12 23:08:39.968193 systemd[1]: Starting audit-rules.service - Load Audit Rules... Sep 12 23:08:39.971913 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. 
Sep 12 23:08:39.976166 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Sep 12 23:08:39.981906 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Sep 12 23:08:39.987919 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Sep 12 23:08:39.991051 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Sep 12 23:08:39.993056 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Sep 12 23:08:39.993114 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Sep 12 23:08:39.996360 systemd[1]: Starting systemd-networkd.service - Network Configuration... Sep 12 23:08:39.998903 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 12 23:08:40.068523 systemd[1]: Finished ensure-sysext.service. Sep 12 23:08:40.072775 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Sep 12 23:08:40.076730 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 12 23:08:40.077017 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Sep 12 23:08:40.080271 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 12 23:08:40.080542 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Sep 12 23:08:40.090528 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 12 23:08:40.090855 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. 
Sep 12 23:08:40.100003 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Sep 12 23:08:40.100102 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Sep 12 23:08:40.105690 augenrules[1473]: /sbin/augenrules: No change Sep 12 23:08:40.115918 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Sep 12 23:08:40.117549 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Sep 12 23:08:40.118143 systemd[1]: modprobe@drm.service: Deactivated successfully. Sep 12 23:08:40.118428 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Sep 12 23:08:40.133004 augenrules[1506]: No rules Sep 12 23:08:40.137316 systemd[1]: audit-rules.service: Deactivated successfully. Sep 12 23:08:40.137660 systemd[1]: Finished audit-rules.service - Load Audit Rules. Sep 12 23:08:40.161958 systemd[1]: Started systemd-userdbd.service - User Database Manager. Sep 12 23:08:40.168534 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Sep 12 23:08:40.195557 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Sep 12 23:08:40.200829 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Sep 12 23:08:40.391806 kernel: mousedev: PS/2 mouse device common for all mice Sep 12 23:08:40.397451 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3 Sep 12 23:08:40.396546 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. 
Sep 12 23:08:40.405044 kernel: ACPI: button: Power Button [PWRF] Sep 12 23:08:40.445808 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt Sep 12 23:08:40.446242 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD Sep 12 23:08:40.582332 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Sep 12 23:08:40.600924 kernel: kvm_amd: TSC scaling supported Sep 12 23:08:40.601002 kernel: kvm_amd: Nested Virtualization enabled Sep 12 23:08:40.601019 kernel: kvm_amd: Nested Paging enabled Sep 12 23:08:40.601033 kernel: kvm_amd: LBR virtualization supported Sep 12 23:08:40.601073 kernel: kvm_amd: Virtual VMLOAD VMSAVE supported Sep 12 23:08:40.602319 kernel: kvm_amd: Virtual GIF supported Sep 12 23:08:40.637885 systemd-networkd[1480]: lo: Link UP Sep 12 23:08:40.637903 systemd-networkd[1480]: lo: Gained carrier Sep 12 23:08:40.640189 systemd-networkd[1480]: Enumeration completed Sep 12 23:08:40.640358 systemd[1]: Started systemd-networkd.service - Network Configuration. Sep 12 23:08:40.641636 systemd-networkd[1480]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 12 23:08:40.641653 systemd-networkd[1480]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Sep 12 23:08:40.643511 systemd-networkd[1480]: eth0: Link UP Sep 12 23:08:40.643791 systemd-networkd[1480]: eth0: Gained carrier Sep 12 23:08:40.643828 systemd-networkd[1480]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 12 23:08:40.647718 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... Sep 12 23:08:40.654029 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... 
Sep 12 23:08:40.687868 systemd-networkd[1480]: eth0: DHCPv4 address 10.0.0.144/16, gateway 10.0.0.1 acquired from 10.0.0.1 Sep 12 23:08:40.720918 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. Sep 12 23:08:40.795806 kernel: EDAC MC: Ver: 3.0.0 Sep 12 23:08:40.813795 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Sep 12 23:08:40.815903 systemd[1]: Reached target time-set.target - System Time Set. Sep 12 23:08:40.821399 systemd-timesyncd[1501]: Contacted time server 10.0.0.1:123 (10.0.0.1). Sep 12 23:08:40.823118 systemd-timesyncd[1501]: Initial clock synchronization to Fri 2025-09-12 23:08:41.118647 UTC. Sep 12 23:08:40.834650 systemd-resolved[1403]: Positive Trust Anchors: Sep 12 23:08:40.834677 systemd-resolved[1403]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Sep 12 23:08:40.834720 systemd-resolved[1403]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Sep 12 23:08:40.840595 systemd-resolved[1403]: Defaulting to hostname 'linux'. Sep 12 23:08:40.843892 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Sep 12 23:08:40.845519 systemd[1]: Reached target network.target - Network. Sep 12 23:08:40.846880 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Sep 12 23:08:40.918887 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. 
Sep 12 23:08:40.920864 systemd[1]: Reached target sysinit.target - System Initialization. Sep 12 23:08:40.922401 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Sep 12 23:08:40.923950 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Sep 12 23:08:40.926283 systemd[1]: Started google-oslogin-cache.timer - NSS cache refresh timer. Sep 12 23:08:40.927959 systemd[1]: Started logrotate.timer - Daily rotation of log files. Sep 12 23:08:40.929401 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Sep 12 23:08:40.930825 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Sep 12 23:08:40.932427 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Sep 12 23:08:40.932470 systemd[1]: Reached target paths.target - Path Units. Sep 12 23:08:40.933514 systemd[1]: Reached target timers.target - Timer Units. Sep 12 23:08:40.936481 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Sep 12 23:08:40.940343 systemd[1]: Starting docker.socket - Docker Socket for the API... Sep 12 23:08:40.944487 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). Sep 12 23:08:40.946711 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). Sep 12 23:08:40.948090 systemd[1]: Reached target ssh-access.target - SSH Access Available. Sep 12 23:08:40.954482 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Sep 12 23:08:40.956629 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. Sep 12 23:08:40.959129 systemd[1]: Listening on docker.socket - Docker Socket for the API. Sep 12 23:08:40.961410 systemd[1]: Reached target sockets.target - Socket Units. 
Sep 12 23:08:40.962675 systemd[1]: Reached target basic.target - Basic System. Sep 12 23:08:40.964080 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Sep 12 23:08:40.964125 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Sep 12 23:08:40.966206 systemd[1]: Starting containerd.service - containerd container runtime... Sep 12 23:08:40.970674 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Sep 12 23:08:40.974498 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Sep 12 23:08:40.979697 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Sep 12 23:08:40.984524 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Sep 12 23:08:40.985916 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Sep 12 23:08:40.987632 systemd[1]: Starting google-oslogin-cache.service - NSS cache refresh... Sep 12 23:08:40.991476 jq[1563]: false Sep 12 23:08:40.991928 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Sep 12 23:08:40.996861 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Sep 12 23:08:41.000201 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Sep 12 23:08:41.003033 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Sep 12 23:08:41.009934 systemd[1]: Starting systemd-logind.service - User Login Management... 
Sep 12 23:08:41.009081 oslogin_cache_refresh[1565]: Refreshing passwd entry cache Sep 12 23:08:41.011458 google_oslogin_nss_cache[1565]: oslogin_cache_refresh[1565]: Refreshing passwd entry cache Sep 12 23:08:41.012360 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Sep 12 23:08:41.014544 extend-filesystems[1564]: Found /dev/vda6 Sep 12 23:08:41.016974 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Sep 12 23:08:41.019309 google_oslogin_nss_cache[1565]: oslogin_cache_refresh[1565]: Failure getting users, quitting Sep 12 23:08:41.019309 google_oslogin_nss_cache[1565]: oslogin_cache_refresh[1565]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. Sep 12 23:08:41.019309 google_oslogin_nss_cache[1565]: oslogin_cache_refresh[1565]: Refreshing group entry cache Sep 12 23:08:41.018585 oslogin_cache_refresh[1565]: Failure getting users, quitting Sep 12 23:08:41.018613 oslogin_cache_refresh[1565]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. Sep 12 23:08:41.018686 oslogin_cache_refresh[1565]: Refreshing group entry cache Sep 12 23:08:41.019962 systemd[1]: Starting update-engine.service - Update Engine... Sep 12 23:08:41.023007 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Sep 12 23:08:41.023578 extend-filesystems[1564]: Found /dev/vda9 Sep 12 23:08:41.028548 google_oslogin_nss_cache[1565]: oslogin_cache_refresh[1565]: Failure getting groups, quitting Sep 12 23:08:41.028548 google_oslogin_nss_cache[1565]: oslogin_cache_refresh[1565]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. 
Sep 12 23:08:41.028513 oslogin_cache_refresh[1565]: Failure getting groups, quitting Sep 12 23:08:41.028531 oslogin_cache_refresh[1565]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Sep 12 23:08:41.030872 extend-filesystems[1564]: Checking size of /dev/vda9 Sep 12 23:08:41.031720 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Sep 12 23:08:41.034232 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Sep 12 23:08:41.034568 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Sep 12 23:08:41.041608 systemd[1]: google-oslogin-cache.service: Deactivated successfully. Sep 12 23:08:41.042193 systemd[1]: Finished google-oslogin-cache.service - NSS cache refresh. Sep 12 23:08:41.044268 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Sep 12 23:08:41.044596 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Sep 12 23:08:41.051680 systemd[1]: motdgen.service: Deactivated successfully. Sep 12 23:08:41.052155 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. 
Sep 12 23:08:41.063714 (ntainerd)[1590]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Sep 12 23:08:41.072520 update_engine[1576]: I20250912 23:08:41.072036 1576 main.cc:92] Flatcar Update Engine starting Sep 12 23:08:41.074239 jq[1577]: true Sep 12 23:08:41.075758 extend-filesystems[1564]: Resized partition /dev/vda9 Sep 12 23:08:41.094245 extend-filesystems[1603]: resize2fs 1.47.3 (8-Jul-2025) Sep 12 23:08:41.104961 jq[1601]: true Sep 12 23:08:41.110416 tar[1587]: linux-amd64/LICENSE Sep 12 23:08:41.112392 tar[1587]: linux-amd64/helm Sep 12 23:08:41.136930 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Sep 12 23:08:41.228335 sshd_keygen[1589]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Sep 12 23:08:41.277837 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Sep 12 23:08:41.308433 dbus-daemon[1561]: [system] SELinux support is enabled Sep 12 23:08:41.311358 systemd[1]: Started dbus.service - D-Bus System Message Bus. Sep 12 23:08:41.411734 update_engine[1576]: I20250912 23:08:41.317618 1576 update_check_scheduler.cc:74] Next update check in 9m50s Sep 12 23:08:41.332381 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Sep 12 23:08:41.411957 extend-filesystems[1603]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Sep 12 23:08:41.411957 extend-filesystems[1603]: old_desc_blocks = 1, new_desc_blocks = 1 Sep 12 23:08:41.411957 extend-filesystems[1603]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. Sep 12 23:08:41.332416 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. 
Sep 12 23:08:41.446651 extend-filesystems[1564]: Resized filesystem in /dev/vda9 Sep 12 23:08:41.457083 bash[1622]: Updated "/home/core/.ssh/authorized_keys" Sep 12 23:08:41.334307 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Sep 12 23:08:41.334329 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Sep 12 23:08:41.337206 systemd[1]: Started update-engine.service - Update Engine. Sep 12 23:08:41.344326 systemd[1]: Started locksmithd.service - Cluster reboot manager. Sep 12 23:08:41.389887 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Sep 12 23:08:41.398827 systemd[1]: Starting issuegen.service - Generate /run/issue... Sep 12 23:08:41.409292 systemd-logind[1573]: Watching system buttons on /dev/input/event2 (Power Button) Sep 12 23:08:41.409326 systemd-logind[1573]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Sep 12 23:08:41.409965 systemd-logind[1573]: New seat seat0. Sep 12 23:08:41.412850 systemd[1]: extend-filesystems.service: Deactivated successfully. Sep 12 23:08:41.413222 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Sep 12 23:08:41.435066 systemd[1]: Started systemd-logind.service - User Login Management. Sep 12 23:08:41.457534 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Sep 12 23:08:41.544764 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Sep 12 23:08:41.551399 systemd[1]: issuegen.service: Deactivated successfully. Sep 12 23:08:41.552038 systemd[1]: Finished issuegen.service - Generate /run/issue. Sep 12 23:08:41.564230 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Sep 12 23:08:41.599095 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. 
Sep 12 23:08:41.604385 systemd[1]: Started getty@tty1.service - Getty on tty1. Sep 12 23:08:41.609727 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Sep 12 23:08:41.611381 systemd[1]: Reached target getty.target - Login Prompts. Sep 12 23:08:41.640221 locksmithd[1628]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Sep 12 23:08:42.224041 containerd[1590]: time="2025-09-12T23:08:42Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8 Sep 12 23:08:42.226646 containerd[1590]: time="2025-09-12T23:08:42.226572806Z" level=info msg="starting containerd" revision=fb4c30d4ede3531652d86197bf3fc9515e5276d9 version=v2.0.5 Sep 12 23:08:42.393768 containerd[1590]: time="2025-09-12T23:08:42.393093105Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="25.21µs" Sep 12 23:08:42.393768 containerd[1590]: time="2025-09-12T23:08:42.393164059Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1 Sep 12 23:08:42.393768 containerd[1590]: time="2025-09-12T23:08:42.393205396Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1 Sep 12 23:08:42.393768 containerd[1590]: time="2025-09-12T23:08:42.393539147Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1 Sep 12 23:08:42.394133 containerd[1590]: time="2025-09-12T23:08:42.394081508Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1 Sep 12 23:08:42.394182 containerd[1590]: time="2025-09-12T23:08:42.394159361Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Sep 12 23:08:42.397726 containerd[1590]: 
time="2025-09-12T23:08:42.394292188Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Sep 12 23:08:42.397726 containerd[1590]: time="2025-09-12T23:08:42.396691417Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Sep 12 23:08:42.400017 containerd[1590]: time="2025-09-12T23:08:42.398674945Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Sep 12 23:08:42.400017 containerd[1590]: time="2025-09-12T23:08:42.399262451Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Sep 12 23:08:42.401208 containerd[1590]: time="2025-09-12T23:08:42.400841098Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Sep 12 23:08:42.403102 containerd[1590]: time="2025-09-12T23:08:42.401857101Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1 Sep 12 23:08:42.403530 containerd[1590]: time="2025-09-12T23:08:42.403481908Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1 Sep 12 23:08:42.403979 containerd[1590]: time="2025-09-12T23:08:42.403868436Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Sep 12 23:08:42.403979 containerd[1590]: time="2025-09-12T23:08:42.403935459Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Sep 12 
23:08:42.403979 containerd[1590]: time="2025-09-12T23:08:42.403949103Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1 Sep 12 23:08:42.404263 containerd[1590]: time="2025-09-12T23:08:42.403997072Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1 Sep 12 23:08:42.404400 containerd[1590]: time="2025-09-12T23:08:42.404328008Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1 Sep 12 23:08:42.404448 containerd[1590]: time="2025-09-12T23:08:42.404417448Z" level=info msg="metadata content store policy set" policy=shared Sep 12 23:08:42.424092 containerd[1590]: time="2025-09-12T23:08:42.423672203Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1 Sep 12 23:08:42.424092 containerd[1590]: time="2025-09-12T23:08:42.423832576Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1 Sep 12 23:08:42.424092 containerd[1590]: time="2025-09-12T23:08:42.423867861Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1 Sep 12 23:08:42.424092 containerd[1590]: time="2025-09-12T23:08:42.423895813Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1 Sep 12 23:08:42.424092 containerd[1590]: time="2025-09-12T23:08:42.423920009Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1 Sep 12 23:08:42.424092 containerd[1590]: time="2025-09-12T23:08:42.423947640Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1 Sep 12 23:08:42.424092 containerd[1590]: time="2025-09-12T23:08:42.423969323Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1 Sep 12 23:08:42.424092 containerd[1590]: 
time="2025-09-12T23:08:42.423991439Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1
Sep 12 23:08:42.424092 containerd[1590]: time="2025-09-12T23:08:42.424011994Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1
Sep 12 23:08:42.424092 containerd[1590]: time="2025-09-12T23:08:42.424027036Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1
Sep 12 23:08:42.424092 containerd[1590]: time="2025-09-12T23:08:42.424043132Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1
Sep 12 23:08:42.424092 containerd[1590]: time="2025-09-12T23:08:42.424070484Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2
Sep 12 23:08:42.425197 containerd[1590]: time="2025-09-12T23:08:42.424338017Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1
Sep 12 23:08:42.425197 containerd[1590]: time="2025-09-12T23:08:42.424375185Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1
Sep 12 23:08:42.425197 containerd[1590]: time="2025-09-12T23:08:42.424445218Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1
Sep 12 23:08:42.425197 containerd[1590]: time="2025-09-12T23:08:42.424474928Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1
Sep 12 23:08:42.425197 containerd[1590]: time="2025-09-12T23:08:42.424493166Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1
Sep 12 23:08:42.425197 containerd[1590]: time="2025-09-12T23:08:42.424508983Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1
Sep 12 23:08:42.425197 containerd[1590]: time="2025-09-12T23:08:42.424523838Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1
Sep 12 23:08:42.425197 containerd[1590]: time="2025-09-12T23:08:42.424554614Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1
Sep 12 23:08:42.425197 containerd[1590]: time="2025-09-12T23:08:42.424574682Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1
Sep 12 23:08:42.425197 containerd[1590]: time="2025-09-12T23:08:42.424589124Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1
Sep 12 23:08:42.425197 containerd[1590]: time="2025-09-12T23:08:42.424620023Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1
Sep 12 23:08:42.425197 containerd[1590]: time="2025-09-12T23:08:42.424826047Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\""
Sep 12 23:08:42.425197 containerd[1590]: time="2025-09-12T23:08:42.424857351Z" level=info msg="Start snapshots syncer"
Sep 12 23:08:42.425197 containerd[1590]: time="2025-09-12T23:08:42.424894602Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1
Sep 12 23:08:42.425648 containerd[1590]: time="2025-09-12T23:08:42.425290295Z" level=info msg="starting cri plugin" config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}"
Sep 12 23:08:42.425648 containerd[1590]: time="2025-09-12T23:08:42.425398491Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1
Sep 12 23:08:42.425911 containerd[1590]: time="2025-09-12T23:08:42.425523723Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1
Sep 12 23:08:42.425911 containerd[1590]: time="2025-09-12T23:08:42.425721493Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1
Sep 12 23:08:42.425911 containerd[1590]: time="2025-09-12T23:08:42.425749475Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1
Sep 12 23:08:42.425911 containerd[1590]: time="2025-09-12T23:08:42.425763865Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1
Sep 12 23:08:42.425911 containerd[1590]: time="2025-09-12T23:08:42.425776588Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1
Sep 12 23:08:42.425911 containerd[1590]: time="2025-09-12T23:08:42.425814543Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1
Sep 12 23:08:42.425911 containerd[1590]: time="2025-09-12T23:08:42.425833226Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1
Sep 12 23:08:42.425911 containerd[1590]: time="2025-09-12T23:08:42.425850087Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1
Sep 12 23:08:42.425911 containerd[1590]: time="2025-09-12T23:08:42.425882942Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1
Sep 12 23:08:42.425911 containerd[1590]: time="2025-09-12T23:08:42.425900227Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1
Sep 12 23:08:42.425911 containerd[1590]: time="2025-09-12T23:08:42.425914193Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1
Sep 12 23:08:42.426417 containerd[1590]: time="2025-09-12T23:08:42.425963651Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
Sep 12 23:08:42.426417 containerd[1590]: time="2025-09-12T23:08:42.425985809Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
Sep 12 23:08:42.426417 containerd[1590]: time="2025-09-12T23:08:42.425998989Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
Sep 12 23:08:42.426417 containerd[1590]: time="2025-09-12T23:08:42.426010760Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
Sep 12 23:08:42.426417 containerd[1590]: time="2025-09-12T23:08:42.426021881Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1
Sep 12 23:08:42.426417 containerd[1590]: time="2025-09-12T23:08:42.426033932Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1
Sep 12 23:08:42.426417 containerd[1590]: time="2025-09-12T23:08:42.426050671Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1
Sep 12 23:08:42.426417 containerd[1590]: time="2025-09-12T23:08:42.426073946Z" level=info msg="runtime interface created"
Sep 12 23:08:42.426417 containerd[1590]: time="2025-09-12T23:08:42.426081415Z" level=info msg="created NRI interface"
Sep 12 23:08:42.426417 containerd[1590]: time="2025-09-12T23:08:42.426093508Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1
Sep 12 23:08:42.426417 containerd[1590]: time="2025-09-12T23:08:42.426106976Z" level=info msg="Connect containerd service"
Sep 12 23:08:42.426417 containerd[1590]: time="2025-09-12T23:08:42.426135734Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
Sep 12 23:08:42.428910 containerd[1590]: time="2025-09-12T23:08:42.428519023Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Sep 12 23:08:42.472146 systemd-networkd[1480]: eth0: Gained IPv6LL
Sep 12 23:08:42.476866 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Sep 12 23:08:42.479616 systemd[1]: Reached target network-online.target - Network is Online.
Sep 12 23:08:42.484342 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent...
Sep 12 23:08:42.492005 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 12 23:08:42.496096 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
Sep 12 23:08:42.614938 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
Sep 12 23:08:42.619007 systemd[1]: coreos-metadata.service: Deactivated successfully.
Sep 12 23:08:42.619388 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent.
Sep 12 23:08:42.622732 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
Sep 12 23:08:42.636040 tar[1587]: linux-amd64/README.md
Sep 12 23:08:42.706869 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin.
Sep 12 23:08:42.712259 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd.
Sep 12 23:08:42.717603 systemd[1]: Started sshd@0-10.0.0.144:22-10.0.0.1:53696.service - OpenSSH per-connection server daemon (10.0.0.1:53696).
Sep 12 23:08:42.755797 containerd[1590]: time="2025-09-12T23:08:42.755700730Z" level=info msg="Start subscribing containerd event"
Sep 12 23:08:42.756164 containerd[1590]: time="2025-09-12T23:08:42.756073211Z" level=info msg="Start recovering state"
Sep 12 23:08:42.756354 containerd[1590]: time="2025-09-12T23:08:42.755737144Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Sep 12 23:08:42.756457 containerd[1590]: time="2025-09-12T23:08:42.756414906Z" level=info msg=serving... address=/run/containerd/containerd.sock
Sep 12 23:08:42.756559 containerd[1590]: time="2025-09-12T23:08:42.756537087Z" level=info msg="Start event monitor"
Sep 12 23:08:42.756659 containerd[1590]: time="2025-09-12T23:08:42.756637616Z" level=info msg="Start cni network conf syncer for default"
Sep 12 23:08:42.756741 containerd[1590]: time="2025-09-12T23:08:42.756724356Z" level=info msg="Start streaming server"
Sep 12 23:08:42.756974 containerd[1590]: time="2025-09-12T23:08:42.756953315Z" level=info msg="Registered namespace \"k8s.io\" with NRI"
Sep 12 23:08:42.757060 containerd[1590]: time="2025-09-12T23:08:42.757042093Z" level=info msg="runtime interface starting up..."
Sep 12 23:08:42.757126 containerd[1590]: time="2025-09-12T23:08:42.757109551Z" level=info msg="starting plugins..."
Sep 12 23:08:42.757211 containerd[1590]: time="2025-09-12T23:08:42.757193425Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
Sep 12 23:08:42.757651 containerd[1590]: time="2025-09-12T23:08:42.757508245Z" level=info msg="containerd successfully booted in 0.547869s"
Sep 12 23:08:42.757759 systemd[1]: Started containerd.service - containerd container runtime.
Sep 12 23:08:42.857185 sshd[1685]: Accepted publickey for core from 10.0.0.1 port 53696 ssh2: RSA SHA256:yYIxjrXQopGJXy2hREtBU3obW+AC5yBbC1aV8QR0JwE
Sep 12 23:08:42.861479 sshd-session[1685]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 12 23:08:42.878762 systemd[1]: Created slice user-500.slice - User Slice of UID 500.
Sep 12 23:08:42.883346 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500...
Sep 12 23:08:42.965425 systemd-logind[1573]: New session 1 of user core.
Sep 12 23:08:43.012915 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500.
Sep 12 23:08:43.032508 systemd[1]: Starting user@500.service - User Manager for UID 500...
Sep 12 23:08:43.064005 (systemd)[1696]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
Sep 12 23:08:43.083234 systemd-logind[1573]: New session c1 of user core.
Sep 12 23:08:43.416004 systemd[1696]: Queued start job for default target default.target.
Sep 12 23:08:43.433904 systemd[1696]: Created slice app.slice - User Application Slice.
Sep 12 23:08:43.433947 systemd[1696]: Reached target paths.target - Paths.
Sep 12 23:08:43.434010 systemd[1696]: Reached target timers.target - Timers.
Sep 12 23:08:43.440230 systemd[1696]: Starting dbus.socket - D-Bus User Message Bus Socket...
Sep 12 23:08:43.467942 systemd[1696]: Listening on dbus.socket - D-Bus User Message Bus Socket.
Sep 12 23:08:43.468162 systemd[1696]: Reached target sockets.target - Sockets.
Sep 12 23:08:43.468258 systemd[1696]: Reached target basic.target - Basic System.
Sep 12 23:08:43.468323 systemd[1696]: Reached target default.target - Main User Target.
Sep 12 23:08:43.468376 systemd[1696]: Startup finished in 362ms.
Sep 12 23:08:43.468834 systemd[1]: Started user@500.service - User Manager for UID 500.
Sep 12 23:08:43.484255 systemd[1]: Started session-1.scope - Session 1 of User core.
Sep 12 23:08:43.556821 systemd[1]: Started sshd@1-10.0.0.144:22-10.0.0.1:53704.service - OpenSSH per-connection server daemon (10.0.0.1:53704).
Sep 12 23:08:43.801099 sshd[1707]: Accepted publickey for core from 10.0.0.1 port 53704 ssh2: RSA SHA256:yYIxjrXQopGJXy2hREtBU3obW+AC5yBbC1aV8QR0JwE
Sep 12 23:08:43.804380 sshd-session[1707]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 12 23:08:43.811828 systemd-logind[1573]: New session 2 of user core.
Sep 12 23:08:43.839217 systemd[1]: Started session-2.scope - Session 2 of User core.
Sep 12 23:08:43.912096 sshd[1710]: Connection closed by 10.0.0.1 port 53704
Sep 12 23:08:43.914728 sshd-session[1707]: pam_unix(sshd:session): session closed for user core
Sep 12 23:08:43.927259 systemd[1]: sshd@1-10.0.0.144:22-10.0.0.1:53704.service: Deactivated successfully.
Sep 12 23:08:43.931179 systemd[1]: session-2.scope: Deactivated successfully.
Sep 12 23:08:43.942760 systemd-logind[1573]: Session 2 logged out. Waiting for processes to exit.
Sep 12 23:08:43.944284 systemd[1]: Started sshd@2-10.0.0.144:22-10.0.0.1:53708.service - OpenSSH per-connection server daemon (10.0.0.1:53708).
Sep 12 23:08:43.950108 systemd-logind[1573]: Removed session 2.
Sep 12 23:08:44.028106 sshd[1716]: Accepted publickey for core from 10.0.0.1 port 53708 ssh2: RSA SHA256:yYIxjrXQopGJXy2hREtBU3obW+AC5yBbC1aV8QR0JwE
Sep 12 23:08:44.031537 sshd-session[1716]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 12 23:08:44.060260 systemd-logind[1573]: New session 3 of user core.
Sep 12 23:08:44.072252 systemd[1]: Started session-3.scope - Session 3 of User core.
Sep 12 23:08:44.188669 sshd[1720]: Connection closed by 10.0.0.1 port 53708
Sep 12 23:08:44.186826 sshd-session[1716]: pam_unix(sshd:session): session closed for user core
Sep 12 23:08:44.204957 systemd[1]: sshd@2-10.0.0.144:22-10.0.0.1:53708.service: Deactivated successfully.
Sep 12 23:08:44.212366 systemd[1]: session-3.scope: Deactivated successfully.
Sep 12 23:08:44.221714 systemd-logind[1573]: Session 3 logged out. Waiting for processes to exit.
Sep 12 23:08:44.226872 systemd-logind[1573]: Removed session 3.
Sep 12 23:08:45.411954 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 12 23:08:45.414284 systemd[1]: Reached target multi-user.target - Multi-User System.
Sep 12 23:08:45.418461 systemd[1]: Startup finished in 5.487s (kernel) + 9.553s (initrd) + 8.272s (userspace) = 23.312s.
Sep 12 23:08:45.449629 (kubelet)[1730]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Sep 12 23:08:46.539714 kubelet[1730]: E0912 23:08:46.539590 1730 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Sep 12 23:08:46.550104 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Sep 12 23:08:46.550342 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Sep 12 23:08:46.553076 systemd[1]: kubelet.service: Consumed 2.773s CPU time, 265M memory peak.
Sep 12 23:08:54.356397 systemd[1]: Started sshd@3-10.0.0.144:22-10.0.0.1:59154.service - OpenSSH per-connection server daemon (10.0.0.1:59154).
Sep 12 23:08:54.418865 sshd[1743]: Accepted publickey for core from 10.0.0.1 port 59154 ssh2: RSA SHA256:yYIxjrXQopGJXy2hREtBU3obW+AC5yBbC1aV8QR0JwE
Sep 12 23:08:54.421115 sshd-session[1743]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 12 23:08:54.428064 systemd-logind[1573]: New session 4 of user core.
Sep 12 23:08:54.438214 systemd[1]: Started session-4.scope - Session 4 of User core.
Sep 12 23:08:54.499998 sshd[1746]: Connection closed by 10.0.0.1 port 59154
Sep 12 23:08:54.500252 sshd-session[1743]: pam_unix(sshd:session): session closed for user core
Sep 12 23:08:54.517049 systemd[1]: sshd@3-10.0.0.144:22-10.0.0.1:59154.service: Deactivated successfully.
Sep 12 23:08:54.519540 systemd[1]: session-4.scope: Deactivated successfully.
Sep 12 23:08:54.520613 systemd-logind[1573]: Session 4 logged out. Waiting for processes to exit.
Sep 12 23:08:54.524330 systemd[1]: Started sshd@4-10.0.0.144:22-10.0.0.1:59160.service - OpenSSH per-connection server daemon (10.0.0.1:59160).
Sep 12 23:08:54.525184 systemd-logind[1573]: Removed session 4.
Sep 12 23:08:54.591125 sshd[1752]: Accepted publickey for core from 10.0.0.1 port 59160 ssh2: RSA SHA256:yYIxjrXQopGJXy2hREtBU3obW+AC5yBbC1aV8QR0JwE
Sep 12 23:08:54.593042 sshd-session[1752]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 12 23:08:54.600453 systemd-logind[1573]: New session 5 of user core.
Sep 12 23:08:54.608972 systemd[1]: Started session-5.scope - Session 5 of User core.
Sep 12 23:08:54.663867 sshd[1755]: Connection closed by 10.0.0.1 port 59160
Sep 12 23:08:54.664668 sshd-session[1752]: pam_unix(sshd:session): session closed for user core
Sep 12 23:08:54.675624 systemd[1]: sshd@4-10.0.0.144:22-10.0.0.1:59160.service: Deactivated successfully.
Sep 12 23:08:54.677977 systemd[1]: session-5.scope: Deactivated successfully.
Sep 12 23:08:54.678926 systemd-logind[1573]: Session 5 logged out. Waiting for processes to exit.
Sep 12 23:08:54.682309 systemd[1]: Started sshd@5-10.0.0.144:22-10.0.0.1:59176.service - OpenSSH per-connection server daemon (10.0.0.1:59176).
Sep 12 23:08:54.683327 systemd-logind[1573]: Removed session 5.
Sep 12 23:08:54.742267 sshd[1761]: Accepted publickey for core from 10.0.0.1 port 59176 ssh2: RSA SHA256:yYIxjrXQopGJXy2hREtBU3obW+AC5yBbC1aV8QR0JwE
Sep 12 23:08:54.744088 sshd-session[1761]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 12 23:08:54.750041 systemd-logind[1573]: New session 6 of user core.
Sep 12 23:08:54.762099 systemd[1]: Started session-6.scope - Session 6 of User core.
Sep 12 23:08:54.822958 sshd[1764]: Connection closed by 10.0.0.1 port 59176
Sep 12 23:08:54.823430 sshd-session[1761]: pam_unix(sshd:session): session closed for user core
Sep 12 23:08:54.846643 systemd[1]: sshd@5-10.0.0.144:22-10.0.0.1:59176.service: Deactivated successfully.
Sep 12 23:08:54.848833 systemd[1]: session-6.scope: Deactivated successfully.
Sep 12 23:08:54.850682 systemd-logind[1573]: Session 6 logged out. Waiting for processes to exit.
Sep 12 23:08:54.853215 systemd[1]: Started sshd@6-10.0.0.144:22-10.0.0.1:59192.service - OpenSSH per-connection server daemon (10.0.0.1:59192).
Sep 12 23:08:54.854739 systemd-logind[1573]: Removed session 6.
Sep 12 23:08:54.910168 sshd[1770]: Accepted publickey for core from 10.0.0.1 port 59192 ssh2: RSA SHA256:yYIxjrXQopGJXy2hREtBU3obW+AC5yBbC1aV8QR0JwE
Sep 12 23:08:54.912113 sshd-session[1770]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 12 23:08:54.919187 systemd-logind[1573]: New session 7 of user core.
Sep 12 23:08:54.933088 systemd[1]: Started session-7.scope - Session 7 of User core.
Sep 12 23:08:55.000787 sudo[1775]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
Sep 12 23:08:55.001174 sudo[1775]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Sep 12 23:08:55.017567 sudo[1775]: pam_unix(sudo:session): session closed for user root
Sep 12 23:08:55.019948 sshd[1774]: Connection closed by 10.0.0.1 port 59192
Sep 12 23:08:55.021404 sshd-session[1770]: pam_unix(sshd:session): session closed for user core
Sep 12 23:08:55.029415 systemd[1]: sshd@6-10.0.0.144:22-10.0.0.1:59192.service: Deactivated successfully.
Sep 12 23:08:55.031679 systemd[1]: session-7.scope: Deactivated successfully.
Sep 12 23:08:55.032834 systemd-logind[1573]: Session 7 logged out. Waiting for processes to exit.
Sep 12 23:08:55.036407 systemd[1]: Started sshd@7-10.0.0.144:22-10.0.0.1:59196.service - OpenSSH per-connection server daemon (10.0.0.1:59196).
Sep 12 23:08:55.037388 systemd-logind[1573]: Removed session 7.
Sep 12 23:08:55.105435 sshd[1781]: Accepted publickey for core from 10.0.0.1 port 59196 ssh2: RSA SHA256:yYIxjrXQopGJXy2hREtBU3obW+AC5yBbC1aV8QR0JwE
Sep 12 23:08:55.107602 sshd-session[1781]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 12 23:08:55.113498 systemd-logind[1573]: New session 8 of user core.
Sep 12 23:08:55.124034 systemd[1]: Started session-8.scope - Session 8 of User core.
Sep 12 23:08:55.181183 sudo[1786]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
Sep 12 23:08:55.181550 sudo[1786]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Sep 12 23:08:55.193387 sudo[1786]: pam_unix(sudo:session): session closed for user root
Sep 12 23:08:55.201255 sudo[1785]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules
Sep 12 23:08:55.201621 sudo[1785]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Sep 12 23:08:55.213381 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Sep 12 23:08:55.264921 augenrules[1808]: No rules
Sep 12 23:08:55.266820 systemd[1]: audit-rules.service: Deactivated successfully.
Sep 12 23:08:55.267121 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Sep 12 23:08:55.268341 sudo[1785]: pam_unix(sudo:session): session closed for user root
Sep 12 23:08:55.270577 sshd[1784]: Connection closed by 10.0.0.1 port 59196
Sep 12 23:08:55.271046 sshd-session[1781]: pam_unix(sshd:session): session closed for user core
Sep 12 23:08:55.281336 systemd[1]: sshd@7-10.0.0.144:22-10.0.0.1:59196.service: Deactivated successfully.
Sep 12 23:08:55.283577 systemd[1]: session-8.scope: Deactivated successfully.
Sep 12 23:08:55.284530 systemd-logind[1573]: Session 8 logged out. Waiting for processes to exit.
Sep 12 23:08:55.287653 systemd[1]: Started sshd@8-10.0.0.144:22-10.0.0.1:59206.service - OpenSSH per-connection server daemon (10.0.0.1:59206).
Sep 12 23:08:55.288732 systemd-logind[1573]: Removed session 8.
Sep 12 23:08:55.355750 sshd[1817]: Accepted publickey for core from 10.0.0.1 port 59206 ssh2: RSA SHA256:yYIxjrXQopGJXy2hREtBU3obW+AC5yBbC1aV8QR0JwE
Sep 12 23:08:55.357285 sshd-session[1817]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 12 23:08:55.363649 systemd-logind[1573]: New session 9 of user core.
Sep 12 23:08:55.381087 systemd[1]: Started session-9.scope - Session 9 of User core.
Sep 12 23:08:55.437751 sudo[1821]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Sep 12 23:08:55.438140 sudo[1821]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Sep 12 23:08:56.371330 systemd[1]: Starting docker.service - Docker Application Container Engine...
Sep 12 23:08:56.396987 (dockerd)[1841]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU
Sep 12 23:08:56.583195 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
Sep 12 23:08:56.586201 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 12 23:08:56.787804 dockerd[1841]: time="2025-09-12T23:08:56.787019248Z" level=info msg="Starting up"
Sep 12 23:08:56.791187 dockerd[1841]: time="2025-09-12T23:08:56.791136794Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider"
Sep 12 23:08:56.813753 dockerd[1841]: time="2025-09-12T23:08:56.813686875Z" level=info msg="Creating a containerd client" address=/var/run/docker/libcontainerd/docker-containerd.sock timeout=1m0s
Sep 12 23:08:57.007115 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 12 23:08:57.013042 (kubelet)[1874]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Sep 12 23:08:57.296659 kubelet[1874]: E0912 23:08:57.296545 1874 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Sep 12 23:08:57.303646 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Sep 12 23:08:57.303945 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Sep 12 23:08:57.304434 systemd[1]: kubelet.service: Consumed 345ms CPU time, 111M memory peak.
Sep 12 23:08:58.056912 dockerd[1841]: time="2025-09-12T23:08:58.056821521Z" level=info msg="Loading containers: start."
Sep 12 23:08:58.071815 kernel: Initializing XFRM netlink socket
Sep 12 23:08:58.453429 systemd-networkd[1480]: docker0: Link UP
Sep 12 23:08:58.460929 dockerd[1841]: time="2025-09-12T23:08:58.460442687Z" level=info msg="Loading containers: done."
Sep 12 23:08:58.477547 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck1095760333-merged.mount: Deactivated successfully.
Sep 12 23:08:58.479602 dockerd[1841]: time="2025-09-12T23:08:58.479554132Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Sep 12 23:08:58.479692 dockerd[1841]: time="2025-09-12T23:08:58.479643762Z" level=info msg="Docker daemon" commit=6430e49a55babd9b8f4d08e70ecb2b68900770fe containerd-snapshotter=false storage-driver=overlay2 version=28.0.4
Sep 12 23:08:58.479737 dockerd[1841]: time="2025-09-12T23:08:58.479723989Z" level=info msg="Initializing buildkit"
Sep 12 23:08:58.517986 dockerd[1841]: time="2025-09-12T23:08:58.517925193Z" level=info msg="Completed buildkit initialization"
Sep 12 23:08:58.523865 dockerd[1841]: time="2025-09-12T23:08:58.523833517Z" level=info msg="Daemon has completed initialization"
Sep 12 23:08:58.523971 dockerd[1841]: time="2025-09-12T23:08:58.523928799Z" level=info msg="API listen on /run/docker.sock"
Sep 12 23:08:58.524165 systemd[1]: Started docker.service - Docker Application Container Engine.
Sep 12 23:08:59.377233 containerd[1590]: time="2025-09-12T23:08:59.377141877Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.9\""
Sep 12 23:09:00.338021 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2436389194.mount: Deactivated successfully.
Sep 12 23:09:03.043011 containerd[1590]: time="2025-09-12T23:09:03.042927224Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.32.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 23:09:03.043894 containerd[1590]: time="2025-09-12T23:09:03.043865827Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.32.9: active requests=0, bytes read=28837916"
Sep 12 23:09:03.045629 containerd[1590]: time="2025-09-12T23:09:03.045585668Z" level=info msg="ImageCreate event name:\"sha256:abd2b525baf428ffb8b8b7d1e09761dc5cdb7ed0c7896a9427e29e84f8eafc59\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 23:09:03.049824 containerd[1590]: time="2025-09-12T23:09:03.049738646Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:6df11cc2ad9679b1117be34d3a0230add88bc0a08fd7a3ebc26b680575e8de97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 23:09:03.050873 containerd[1590]: time="2025-09-12T23:09:03.050817777Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.32.9\" with image id \"sha256:abd2b525baf428ffb8b8b7d1e09761dc5cdb7ed0c7896a9427e29e84f8eafc59\", repo tag \"registry.k8s.io/kube-apiserver:v1.32.9\", repo digest \"registry.k8s.io/kube-apiserver@sha256:6df11cc2ad9679b1117be34d3a0230add88bc0a08fd7a3ebc26b680575e8de97\", size \"28834515\" in 3.673598228s"
Sep 12 23:09:03.050873 containerd[1590]: time="2025-09-12T23:09:03.050864638Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.9\" returns image reference \"sha256:abd2b525baf428ffb8b8b7d1e09761dc5cdb7ed0c7896a9427e29e84f8eafc59\""
Sep 12 23:09:03.051789 containerd[1590]: time="2025-09-12T23:09:03.051577273Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.9\""
Sep 12 23:09:04.475082 containerd[1590]: time="2025-09-12T23:09:04.475013962Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.32.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 23:09:04.476197 containerd[1590]: time="2025-09-12T23:09:04.476145478Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.32.9: active requests=0, bytes read=24787027"
Sep 12 23:09:04.478644 containerd[1590]: time="2025-09-12T23:09:04.478602426Z" level=info msg="ImageCreate event name:\"sha256:0debe32fbb7223500fcf8c312f2a568a5abd3ed9274d8ec6780cfb30b8861e91\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 23:09:04.482335 containerd[1590]: time="2025-09-12T23:09:04.482283904Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:243c4b8e3bce271fcb1b78008ab996ab6976b1a20096deac08338fcd17979922\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 23:09:04.483463 containerd[1590]: time="2025-09-12T23:09:04.483408755Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.32.9\" with image id \"sha256:0debe32fbb7223500fcf8c312f2a568a5abd3ed9274d8ec6780cfb30b8861e91\", repo tag \"registry.k8s.io/kube-controller-manager:v1.32.9\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:243c4b8e3bce271fcb1b78008ab996ab6976b1a20096deac08338fcd17979922\", size \"26421706\" in 1.431781173s"
Sep 12 23:09:04.483463 containerd[1590]: time="2025-09-12T23:09:04.483447645Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.9\" returns image reference \"sha256:0debe32fbb7223500fcf8c312f2a568a5abd3ed9274d8ec6780cfb30b8861e91\""
Sep 12 23:09:04.484047 containerd[1590]: time="2025-09-12T23:09:04.484001736Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.9\""
Sep 12 23:09:07.141608 containerd[1590]: time="2025-09-12T23:09:07.140383765Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.32.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 23:09:07.142666 containerd[1590]: time="2025-09-12T23:09:07.142614196Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.32.9: active requests=0, bytes read=19176289"
Sep 12 23:09:07.147839 containerd[1590]: time="2025-09-12T23:09:07.145926912Z" level=info msg="ImageCreate event name:\"sha256:6934c23b154fcb9bf54ed5913782de746735a49f4daa4732285915050cd44ad5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 23:09:07.152192 containerd[1590]: time="2025-09-12T23:09:07.150082752Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:50c49520dbd0e8b4076b6a5c77d8014df09ea3d59a73e8bafd2678d51ebb92d5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 23:09:07.152192 containerd[1590]: time="2025-09-12T23:09:07.151514044Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.32.9\" with image id \"sha256:6934c23b154fcb9bf54ed5913782de746735a49f4daa4732285915050cd44ad5\", repo tag \"registry.k8s.io/kube-scheduler:v1.32.9\", repo digest \"registry.k8s.io/kube-scheduler@sha256:50c49520dbd0e8b4076b6a5c77d8014df09ea3d59a73e8bafd2678d51ebb92d5\", size \"20810986\" in 2.66746331s"
Sep 12 23:09:07.152192 containerd[1590]: time="2025-09-12T23:09:07.151561849Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.9\" returns image reference \"sha256:6934c23b154fcb9bf54ed5913782de746735a49f4daa4732285915050cd44ad5\""
Sep 12 23:09:07.154033 containerd[1590]: time="2025-09-12T23:09:07.153745820Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.9\""
Sep 12 23:09:07.333350 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
Sep 12 23:09:07.345064 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 12 23:09:07.801579 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 12 23:09:07.829884 (kubelet)[2150]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Sep 12 23:09:08.188670 kubelet[2150]: E0912 23:09:08.188578 2150 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Sep 12 23:09:08.193488 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Sep 12 23:09:08.193734 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Sep 12 23:09:08.194195 systemd[1]: kubelet.service: Consumed 471ms CPU time, 111.1M memory peak.
Sep 12 23:09:09.145233 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3786643594.mount: Deactivated successfully.
Sep 12 23:09:10.619312 containerd[1590]: time="2025-09-12T23:09:10.619247823Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.32.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 23:09:10.620797 containerd[1590]: time="2025-09-12T23:09:10.620665626Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.32.9: active requests=0, bytes read=30924206"
Sep 12 23:09:10.622056 containerd[1590]: time="2025-09-12T23:09:10.622019661Z" level=info msg="ImageCreate event name:\"sha256:fa3fdca615a501743d8deb39729a96e731312aac8d96accec061d5265360332f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 23:09:10.624854 containerd[1590]: time="2025-09-12T23:09:10.624651016Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:886af02535dc34886e4618b902f8c140d89af57233a245621d29642224516064\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 23:09:10.625511 containerd[1590]: time="2025-09-12T23:09:10.625442361Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.32.9\" with image id \"sha256:fa3fdca615a501743d8deb39729a96e731312aac8d96accec061d5265360332f\", repo tag \"registry.k8s.io/kube-proxy:v1.32.9\", repo digest \"registry.k8s.io/kube-proxy@sha256:886af02535dc34886e4618b902f8c140d89af57233a245621d29642224516064\", size \"30923225\" in 3.471637065s"
Sep 12 23:09:10.625511 containerd[1590]: time="2025-09-12T23:09:10.625506130Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.9\" returns image reference \"sha256:fa3fdca615a501743d8deb39729a96e731312aac8d96accec061d5265360332f\""
Sep 12 23:09:10.626269 containerd[1590]: time="2025-09-12T23:09:10.626221173Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\""
Sep 12 23:09:11.265237 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount198151496.mount: Deactivated successfully.
Sep 12 23:09:14.882795 containerd[1590]: time="2025-09-12T23:09:14.882654293Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 23:09:14.891291 containerd[1590]: time="2025-09-12T23:09:14.891215770Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=18565241"
Sep 12 23:09:14.949671 containerd[1590]: time="2025-09-12T23:09:14.949529931Z" level=info msg="ImageCreate event name:\"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 23:09:14.960690 containerd[1590]: time="2025-09-12T23:09:14.955492955Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 23:09:14.960690 containerd[1590]: time="2025-09-12T23:09:14.959450658Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"18562039\" in 4.333182s"
Sep 12 23:09:14.960690 containerd[1590]: time="2025-09-12T23:09:14.959516048Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\""
Sep 12 23:09:14.960690 containerd[1590]: time="2025-09-12T23:09:14.960539206Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\""
Sep 12 23:09:16.054049 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3430674249.mount: Deactivated successfully.
Sep 12 23:09:16.091115 containerd[1590]: time="2025-09-12T23:09:16.090119702Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Sep 12 23:09:16.094049 containerd[1590]: time="2025-09-12T23:09:16.093966791Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138"
Sep 12 23:09:16.097409 containerd[1590]: time="2025-09-12T23:09:16.097336617Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Sep 12 23:09:16.101151 containerd[1590]: time="2025-09-12T23:09:16.101057428Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Sep 12 23:09:16.101861 containerd[1590]: time="2025-09-12T23:09:16.101804723Z" level=info msg="Pulled
image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 1.141231609s" Sep 12 23:09:16.101925 containerd[1590]: time="2025-09-12T23:09:16.101866599Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Sep 12 23:09:16.102521 containerd[1590]: time="2025-09-12T23:09:16.102465067Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\"" Sep 12 23:09:16.849912 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2173811474.mount: Deactivated successfully. Sep 12 23:09:18.333033 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Sep 12 23:09:18.334937 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 12 23:09:18.656602 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 12 23:09:18.662817 (kubelet)[2244]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 12 23:09:18.901686 kubelet[2244]: E0912 23:09:18.901532 2244 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 12 23:09:18.907174 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 12 23:09:18.907397 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 12 23:09:18.907992 systemd[1]: kubelet.service: Consumed 258ms CPU time, 110.1M memory peak. 
Sep 12 23:09:21.643111 containerd[1590]: time="2025-09-12T23:09:21.643058866Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.16-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 23:09:21.643968 containerd[1590]: time="2025-09-12T23:09:21.643936402Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.16-0: active requests=0, bytes read=57682056" Sep 12 23:09:21.645585 containerd[1590]: time="2025-09-12T23:09:21.645548858Z" level=info msg="ImageCreate event name:\"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 23:09:21.648479 containerd[1590]: time="2025-09-12T23:09:21.648427822Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 23:09:21.650038 containerd[1590]: time="2025-09-12T23:09:21.649989899Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.16-0\" with image id \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\", repo tag \"registry.k8s.io/etcd:3.5.16-0\", repo digest \"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\", size \"57680541\" in 5.547488181s" Sep 12 23:09:21.650094 containerd[1590]: time="2025-09-12T23:09:21.650038945Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\" returns image reference \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\"" Sep 12 23:09:24.303167 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Sep 12 23:09:24.303374 systemd[1]: kubelet.service: Consumed 258ms CPU time, 110.1M memory peak. Sep 12 23:09:24.306284 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 12 23:09:24.341269 systemd[1]: Reload requested from client PID 2324 ('systemctl') (unit session-9.scope)... 
Sep 12 23:09:24.341291 systemd[1]: Reloading... Sep 12 23:09:24.460805 zram_generator::config[2371]: No configuration found. Sep 12 23:09:25.106556 systemd[1]: Reloading finished in 764 ms. Sep 12 23:09:25.185895 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Sep 12 23:09:25.186010 systemd[1]: kubelet.service: Failed with result 'signal'. Sep 12 23:09:25.186408 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Sep 12 23:09:25.186457 systemd[1]: kubelet.service: Consumed 173ms CPU time, 98.4M memory peak. Sep 12 23:09:25.188288 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 12 23:09:25.405825 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 12 23:09:25.424271 (kubelet)[2415]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Sep 12 23:09:25.477483 kubelet[2415]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 12 23:09:25.477483 kubelet[2415]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Sep 12 23:09:25.477483 kubelet[2415]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Sep 12 23:09:25.477960 kubelet[2415]: I0912 23:09:25.477554 2415 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Sep 12 23:09:25.886875 kubelet[2415]: I0912 23:09:25.886813 2415 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Sep 12 23:09:25.886875 kubelet[2415]: I0912 23:09:25.886852 2415 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Sep 12 23:09:25.887256 kubelet[2415]: I0912 23:09:25.887217 2415 server.go:954] "Client rotation is on, will bootstrap in background" Sep 12 23:09:25.915168 kubelet[2415]: E0912 23:09:25.915085 2415 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.144:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.144:6443: connect: connection refused" logger="UnhandledError" Sep 12 23:09:25.916865 kubelet[2415]: I0912 23:09:25.916820 2415 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Sep 12 23:09:25.926222 kubelet[2415]: I0912 23:09:25.926186 2415 server.go:1444] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Sep 12 23:09:25.933891 kubelet[2415]: I0912 23:09:25.933836 2415 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Sep 12 23:09:25.934866 kubelet[2415]: I0912 23:09:25.934812 2415 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Sep 12 23:09:25.935103 kubelet[2415]: I0912 23:09:25.934856 2415 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Sep 12 23:09:25.935262 kubelet[2415]: I0912 23:09:25.935109 2415 topology_manager.go:138] "Creating topology manager with none policy" 
Sep 12 23:09:25.935262 kubelet[2415]: I0912 23:09:25.935121 2415 container_manager_linux.go:304] "Creating device plugin manager" Sep 12 23:09:25.935359 kubelet[2415]: I0912 23:09:25.935340 2415 state_mem.go:36] "Initialized new in-memory state store" Sep 12 23:09:25.938881 kubelet[2415]: I0912 23:09:25.938850 2415 kubelet.go:446] "Attempting to sync node with API server" Sep 12 23:09:25.938934 kubelet[2415]: I0912 23:09:25.938888 2415 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Sep 12 23:09:25.938934 kubelet[2415]: I0912 23:09:25.938923 2415 kubelet.go:352] "Adding apiserver pod source" Sep 12 23:09:25.938985 kubelet[2415]: I0912 23:09:25.938939 2415 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Sep 12 23:09:25.942053 kubelet[2415]: I0912 23:09:25.942029 2415 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v2.0.5" apiVersion="v1" Sep 12 23:09:25.942479 kubelet[2415]: I0912 23:09:25.942429 2415 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Sep 12 23:09:25.944612 kubelet[2415]: W0912 23:09:25.944100 2415 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
Sep 12 23:09:25.946599 kubelet[2415]: W0912 23:09:25.946533 2415 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.144:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.144:6443: connect: connection refused Sep 12 23:09:25.946683 kubelet[2415]: E0912 23:09:25.946612 2415 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.144:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.144:6443: connect: connection refused" logger="UnhandledError" Sep 12 23:09:25.947692 kubelet[2415]: W0912 23:09:25.947625 2415 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.144:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.144:6443: connect: connection refused Sep 12 23:09:25.947750 kubelet[2415]: E0912 23:09:25.947707 2415 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.144:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.144:6443: connect: connection refused" logger="UnhandledError" Sep 12 23:09:25.949442 kubelet[2415]: I0912 23:09:25.949413 2415 watchdog_linux.go:99] "Systemd watchdog is not enabled" Sep 12 23:09:25.949522 kubelet[2415]: I0912 23:09:25.949480 2415 server.go:1287] "Started kubelet" Sep 12 23:09:25.950360 kubelet[2415]: I0912 23:09:25.950261 2415 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Sep 12 23:09:25.955939 kubelet[2415]: I0912 23:09:25.954595 2415 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Sep 12 23:09:25.955939 kubelet[2415]: I0912 
23:09:25.954745 2415 server.go:479] "Adding debug handlers to kubelet server" Sep 12 23:09:25.955939 kubelet[2415]: I0912 23:09:25.955191 2415 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Sep 12 23:09:25.956627 kubelet[2415]: E0912 23:09:25.955263 2415 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.144:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.144:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.1864abb912868640 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-09-12 23:09:25.9494416 +0000 UTC m=+0.520656191,LastTimestamp:2025-09-12 23:09:25.9494416 +0000 UTC m=+0.520656191,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Sep 12 23:09:25.956895 kubelet[2415]: I0912 23:09:25.956875 2415 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Sep 12 23:09:25.957563 kubelet[2415]: I0912 23:09:25.956911 2415 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Sep 12 23:09:25.957694 kubelet[2415]: I0912 23:09:25.957670 2415 volume_manager.go:297] "Starting Kubelet Volume Manager" Sep 12 23:09:25.957901 kubelet[2415]: E0912 23:09:25.957843 2415 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 12 23:09:25.958055 kubelet[2415]: I0912 23:09:25.958027 2415 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Sep 12 23:09:25.958176 kubelet[2415]: I0912 23:09:25.958138 2415 
reconciler.go:26] "Reconciler: start to sync state" Sep 12 23:09:25.958250 kubelet[2415]: E0912 23:09:25.958213 2415 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.144:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.144:6443: connect: connection refused" interval="200ms" Sep 12 23:09:25.958850 kubelet[2415]: E0912 23:09:25.958816 2415 kubelet.go:1555] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Sep 12 23:09:25.958972 kubelet[2415]: W0912 23:09:25.958928 2415 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.144:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.144:6443: connect: connection refused Sep 12 23:09:25.959036 kubelet[2415]: E0912 23:09:25.958977 2415 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.144:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.144:6443: connect: connection refused" logger="UnhandledError" Sep 12 23:09:25.959321 kubelet[2415]: I0912 23:09:25.959276 2415 factory.go:221] Registration of the systemd container factory successfully Sep 12 23:09:25.959399 kubelet[2415]: I0912 23:09:25.959373 2415 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Sep 12 23:09:25.960570 kubelet[2415]: I0912 23:09:25.960542 2415 factory.go:221] Registration of the containerd container factory successfully Sep 12 23:09:25.978973 kubelet[2415]: I0912 23:09:25.978934 2415 cpu_manager.go:221] "Starting CPU manager" policy="none" Sep 12 23:09:25.978973 kubelet[2415]: I0912 23:09:25.978957 2415 
cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Sep 12 23:09:25.978973 kubelet[2415]: I0912 23:09:25.978980 2415 state_mem.go:36] "Initialized new in-memory state store" Sep 12 23:09:25.979385 kubelet[2415]: I0912 23:09:25.979312 2415 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Sep 12 23:09:25.980794 kubelet[2415]: I0912 23:09:25.980750 2415 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Sep 12 23:09:25.980840 kubelet[2415]: I0912 23:09:25.980815 2415 status_manager.go:227] "Starting to sync pod status with apiserver" Sep 12 23:09:25.981418 kubelet[2415]: I0912 23:09:25.981003 2415 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Sep 12 23:09:25.981418 kubelet[2415]: I0912 23:09:25.981021 2415 kubelet.go:2382] "Starting kubelet main sync loop" Sep 12 23:09:25.981418 kubelet[2415]: E0912 23:09:25.981070 2415 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Sep 12 23:09:26.059005 kubelet[2415]: E0912 23:09:26.058935 2415 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 12 23:09:26.081340 kubelet[2415]: E0912 23:09:26.081252 2415 kubelet.go:2406] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Sep 12 23:09:26.128628 update_engine[1576]: I20250912 23:09:26.128517 1576 update_attempter.cc:509] Updating boot flags... 
Sep 12 23:09:26.159164 kubelet[2415]: E0912 23:09:26.158994 2415 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 12 23:09:26.159293 kubelet[2415]: E0912 23:09:26.159196 2415 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.144:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.144:6443: connect: connection refused" interval="400ms" Sep 12 23:09:26.259799 kubelet[2415]: E0912 23:09:26.259721 2415 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 12 23:09:26.282108 kubelet[2415]: E0912 23:09:26.282034 2415 kubelet.go:2406] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Sep 12 23:09:26.360919 kubelet[2415]: E0912 23:09:26.360739 2415 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 12 23:09:26.461924 kubelet[2415]: E0912 23:09:26.461691 2415 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 12 23:09:26.560924 kubelet[2415]: E0912 23:09:26.560859 2415 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.144:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.144:6443: connect: connection refused" interval="800ms" Sep 12 23:09:26.561859 kubelet[2415]: E0912 23:09:26.561807 2415 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 12 23:09:26.662441 kubelet[2415]: E0912 23:09:26.662318 2415 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 12 23:09:26.682665 kubelet[2415]: E0912 23:09:26.682582 2415 kubelet.go:2406] "Skipping pod synchronization" err="container runtime 
status check may not have completed yet" Sep 12 23:09:26.763195 kubelet[2415]: E0912 23:09:26.763040 2415 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 12 23:09:26.863633 kubelet[2415]: E0912 23:09:26.863526 2415 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 12 23:09:26.879377 kubelet[2415]: W0912 23:09:26.879321 2415 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.144:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.144:6443: connect: connection refused Sep 12 23:09:26.879461 kubelet[2415]: E0912 23:09:26.879374 2415 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.144:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.144:6443: connect: connection refused" logger="UnhandledError" Sep 12 23:09:26.963971 kubelet[2415]: E0912 23:09:26.963923 2415 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 12 23:09:27.064696 kubelet[2415]: E0912 23:09:27.064508 2415 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 12 23:09:27.112312 kubelet[2415]: W0912 23:09:27.112249 2415 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.144:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.144:6443: connect: connection refused Sep 12 23:09:27.112312 kubelet[2415]: E0912 23:09:27.112303 2415 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get 
\"https://10.0.0.144:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.144:6443: connect: connection refused" logger="UnhandledError" Sep 12 23:09:27.164906 kubelet[2415]: E0912 23:09:27.164840 2415 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 12 23:09:27.170928 kubelet[2415]: W0912 23:09:27.170844 2415 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.144:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.144:6443: connect: connection refused Sep 12 23:09:27.170928 kubelet[2415]: E0912 23:09:27.170923 2415 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.144:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.144:6443: connect: connection refused" logger="UnhandledError" Sep 12 23:09:27.210567 kubelet[2415]: I0912 23:09:27.210396 2415 policy_none.go:49] "None policy: Start" Sep 12 23:09:27.210567 kubelet[2415]: I0912 23:09:27.210449 2415 memory_manager.go:186] "Starting memorymanager" policy="None" Sep 12 23:09:27.210567 kubelet[2415]: I0912 23:09:27.210471 2415 state_mem.go:35] "Initializing new in-memory state store" Sep 12 23:09:27.265805 kubelet[2415]: E0912 23:09:27.265060 2415 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 12 23:09:27.342186 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. 
Sep 12 23:09:27.362369 kubelet[2415]: E0912 23:09:27.362232 2415 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.144:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.144:6443: connect: connection refused" interval="1.6s" Sep 12 23:09:27.366811 kubelet[2415]: E0912 23:09:27.365751 2415 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 12 23:09:27.383155 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Sep 12 23:09:27.411623 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Sep 12 23:09:27.435299 kubelet[2415]: I0912 23:09:27.435196 2415 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Sep 12 23:09:27.435558 kubelet[2415]: I0912 23:09:27.435535 2415 eviction_manager.go:189] "Eviction manager: starting control loop" Sep 12 23:09:27.435602 kubelet[2415]: I0912 23:09:27.435556 2415 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Sep 12 23:09:27.436039 kubelet[2415]: I0912 23:09:27.436013 2415 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Sep 12 23:09:27.437016 kubelet[2415]: E0912 23:09:27.436982 2415 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Sep 12 23:09:27.437246 kubelet[2415]: E0912 23:09:27.437073 2415 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Sep 12 23:09:27.460786 kubelet[2415]: W0912 23:09:27.460681 2415 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.144:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.144:6443: connect: connection refused Sep 12 23:09:27.460895 kubelet[2415]: E0912 23:09:27.460802 2415 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.144:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.144:6443: connect: connection refused" logger="UnhandledError" Sep 12 23:09:27.495632 systemd[1]: Created slice kubepods-burstable-pod72a30db4fc25e4da65a3b99eba43be94.slice - libcontainer container kubepods-burstable-pod72a30db4fc25e4da65a3b99eba43be94.slice. Sep 12 23:09:27.531071 kubelet[2415]: E0912 23:09:27.531013 2415 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 12 23:09:27.535471 systemd[1]: Created slice kubepods-burstable-pod1403266a9792debaa127cd8df7a81c3c.slice - libcontainer container kubepods-burstable-pod1403266a9792debaa127cd8df7a81c3c.slice. 
Sep 12 23:09:27.537358 kubelet[2415]: I0912 23:09:27.537318 2415 kubelet_node_status.go:75] "Attempting to register node" node="localhost"
Sep 12 23:09:27.537787 kubelet[2415]: E0912 23:09:27.537732 2415 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.144:6443/api/v1/nodes\": dial tcp 10.0.0.144:6443: connect: connection refused" node="localhost"
Sep 12 23:09:27.539256 kubelet[2415]: E0912 23:09:27.539229 2415 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Sep 12 23:09:27.541417 systemd[1]: Created slice kubepods-burstable-poddf416b65d45947c2d3c0934d5379a140.slice - libcontainer container kubepods-burstable-poddf416b65d45947c2d3c0934d5379a140.slice.
Sep 12 23:09:27.543054 kubelet[2415]: E0912 23:09:27.543020 2415 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Sep 12 23:09:27.566611 kubelet[2415]: I0912 23:09:27.566519 2415 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/72a30db4fc25e4da65a3b99eba43be94-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"72a30db4fc25e4da65a3b99eba43be94\") " pod="kube-system/kube-scheduler-localhost"
Sep 12 23:09:27.566611 kubelet[2415]: I0912 23:09:27.566573 2415 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/df416b65d45947c2d3c0934d5379a140-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"df416b65d45947c2d3c0934d5379a140\") " pod="kube-system/kube-apiserver-localhost"
Sep 12 23:09:27.566611 kubelet[2415]: I0912 23:09:27.566603 2415 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/df416b65d45947c2d3c0934d5379a140-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"df416b65d45947c2d3c0934d5379a140\") " pod="kube-system/kube-apiserver-localhost"
Sep 12 23:09:27.566611 kubelet[2415]: I0912 23:09:27.566632 2415 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/1403266a9792debaa127cd8df7a81c3c-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"1403266a9792debaa127cd8df7a81c3c\") " pod="kube-system/kube-controller-manager-localhost"
Sep 12 23:09:27.567329 kubelet[2415]: I0912 23:09:27.566651 2415 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/1403266a9792debaa127cd8df7a81c3c-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"1403266a9792debaa127cd8df7a81c3c\") " pod="kube-system/kube-controller-manager-localhost"
Sep 12 23:09:27.567329 kubelet[2415]: I0912 23:09:27.566671 2415 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/1403266a9792debaa127cd8df7a81c3c-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"1403266a9792debaa127cd8df7a81c3c\") " pod="kube-system/kube-controller-manager-localhost"
Sep 12 23:09:27.567329 kubelet[2415]: I0912 23:09:27.566691 2415 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/df416b65d45947c2d3c0934d5379a140-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"df416b65d45947c2d3c0934d5379a140\") " pod="kube-system/kube-apiserver-localhost"
Sep 12 23:09:27.567329 kubelet[2415]: I0912 23:09:27.566728 2415 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/1403266a9792debaa127cd8df7a81c3c-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"1403266a9792debaa127cd8df7a81c3c\") " pod="kube-system/kube-controller-manager-localhost"
Sep 12 23:09:27.567329 kubelet[2415]: I0912 23:09:27.566866 2415 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/1403266a9792debaa127cd8df7a81c3c-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"1403266a9792debaa127cd8df7a81c3c\") " pod="kube-system/kube-controller-manager-localhost"
Sep 12 23:09:27.739664 kubelet[2415]: I0912 23:09:27.739515 2415 kubelet_node_status.go:75] "Attempting to register node" node="localhost"
Sep 12 23:09:27.739958 kubelet[2415]: E0912 23:09:27.739926 2415 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.144:6443/api/v1/nodes\": dial tcp 10.0.0.144:6443: connect: connection refused" node="localhost"
Sep 12 23:09:27.832141 kubelet[2415]: E0912 23:09:27.832090 2415 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 12 23:09:27.832959 containerd[1590]: time="2025-09-12T23:09:27.832912111Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:72a30db4fc25e4da65a3b99eba43be94,Namespace:kube-system,Attempt:0,}"
Sep 12 23:09:27.840139 kubelet[2415]: E0912 23:09:27.840119 2415 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 12 23:09:27.840604 containerd[1590]: time="2025-09-12T23:09:27.840551816Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:1403266a9792debaa127cd8df7a81c3c,Namespace:kube-system,Attempt:0,}"
Sep 12 23:09:27.844024 kubelet[2415]: E0912 23:09:27.843990 2415 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 12 23:09:27.844501 containerd[1590]: time="2025-09-12T23:09:27.844469698Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:df416b65d45947c2d3c0934d5379a140,Namespace:kube-system,Attempt:0,}"
Sep 12 23:09:27.886193 containerd[1590]: time="2025-09-12T23:09:27.886007875Z" level=info msg="connecting to shim 45b3cb3c56c92a33c123a939f0a50c36b14063869183af8357bc558e7f2cba35" address="unix:///run/containerd/s/3bf77fee6abe710cdd6517210358a2184a22a62fefebae8c021c2d7c19981321" namespace=k8s.io protocol=ttrpc version=3
Sep 12 23:09:27.887752 containerd[1590]: time="2025-09-12T23:09:27.887716897Z" level=info msg="connecting to shim 14814d6d845b18bf34bb87a245f63d9f28a425e8c0b05c4a174d39ce99032f94" address="unix:///run/containerd/s/3527f742b660d08889640ad165bab498cb5450c053c07cc4c12ea329b5b9e0e2" namespace=k8s.io protocol=ttrpc version=3
Sep 12 23:09:27.904022 containerd[1590]: time="2025-09-12T23:09:27.903969784Z" level=info msg="connecting to shim 6ea9284f7f09f78594e8ed9be0414221b1de9e64901a48880073c0f7d9454710" address="unix:///run/containerd/s/28925a3f84dd7143881b3ae2b21b64fa2d721c1af65b4750d62e3d7588158102" namespace=k8s.io protocol=ttrpc version=3
Sep 12 23:09:27.928936 systemd[1]: Started cri-containerd-14814d6d845b18bf34bb87a245f63d9f28a425e8c0b05c4a174d39ce99032f94.scope - libcontainer container 14814d6d845b18bf34bb87a245f63d9f28a425e8c0b05c4a174d39ce99032f94.
Sep 12 23:09:27.930458 systemd[1]: Started cri-containerd-45b3cb3c56c92a33c123a939f0a50c36b14063869183af8357bc558e7f2cba35.scope - libcontainer container 45b3cb3c56c92a33c123a939f0a50c36b14063869183af8357bc558e7f2cba35.
Sep 12 23:09:27.936971 systemd[1]: Started cri-containerd-6ea9284f7f09f78594e8ed9be0414221b1de9e64901a48880073c0f7d9454710.scope - libcontainer container 6ea9284f7f09f78594e8ed9be0414221b1de9e64901a48880073c0f7d9454710.
Sep 12 23:09:27.982855 containerd[1590]: time="2025-09-12T23:09:27.982798016Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:72a30db4fc25e4da65a3b99eba43be94,Namespace:kube-system,Attempt:0,} returns sandbox id \"45b3cb3c56c92a33c123a939f0a50c36b14063869183af8357bc558e7f2cba35\""
Sep 12 23:09:27.985774 kubelet[2415]: E0912 23:09:27.985724 2415 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 12 23:09:27.989885 containerd[1590]: time="2025-09-12T23:09:27.989229900Z" level=info msg="CreateContainer within sandbox \"45b3cb3c56c92a33c123a939f0a50c36b14063869183af8357bc558e7f2cba35\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}"
Sep 12 23:09:27.996585 containerd[1590]: time="2025-09-12T23:09:27.996521808Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:1403266a9792debaa127cd8df7a81c3c,Namespace:kube-system,Attempt:0,} returns sandbox id \"14814d6d845b18bf34bb87a245f63d9f28a425e8c0b05c4a174d39ce99032f94\""
Sep 12 23:09:27.997221 containerd[1590]: time="2025-09-12T23:09:27.997178065Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:df416b65d45947c2d3c0934d5379a140,Namespace:kube-system,Attempt:0,} returns sandbox id \"6ea9284f7f09f78594e8ed9be0414221b1de9e64901a48880073c0f7d9454710\""
Sep 12 23:09:27.997707 kubelet[2415]: E0912 23:09:27.997512 2415 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 12 23:09:27.998154 kubelet[2415]: E0912 23:09:27.998130 2415 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 12 23:09:27.999101 containerd[1590]: time="2025-09-12T23:09:27.999068503Z" level=info msg="CreateContainer within sandbox \"14814d6d845b18bf34bb87a245f63d9f28a425e8c0b05c4a174d39ce99032f94\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}"
Sep 12 23:09:28.002499 containerd[1590]: time="2025-09-12T23:09:28.002463316Z" level=info msg="Container 9fc61457ad219795efffdfa69480717eb4c1d71ca3b958d1cbd929bc00e0e191: CDI devices from CRI Config.CDIDevices: []"
Sep 12 23:09:28.004719 containerd[1590]: time="2025-09-12T23:09:28.004630586Z" level=info msg="CreateContainer within sandbox \"6ea9284f7f09f78594e8ed9be0414221b1de9e64901a48880073c0f7d9454710\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}"
Sep 12 23:09:28.015463 containerd[1590]: time="2025-09-12T23:09:28.015400026Z" level=info msg="CreateContainer within sandbox \"45b3cb3c56c92a33c123a939f0a50c36b14063869183af8357bc558e7f2cba35\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"9fc61457ad219795efffdfa69480717eb4c1d71ca3b958d1cbd929bc00e0e191\""
Sep 12 23:09:28.016040 containerd[1590]: time="2025-09-12T23:09:28.016008055Z" level=info msg="Container 70a02c1f08126de25d4365785c1ce38cf36546e6f54ce8c5bb31b555f8053b88: CDI devices from CRI Config.CDIDevices: []"
Sep 12 23:09:28.016201 containerd[1590]: time="2025-09-12T23:09:28.016177280Z" level=info msg="StartContainer for \"9fc61457ad219795efffdfa69480717eb4c1d71ca3b958d1cbd929bc00e0e191\""
Sep 12 23:09:28.017423 containerd[1590]: time="2025-09-12T23:09:28.017392738Z" level=info msg="connecting to shim 9fc61457ad219795efffdfa69480717eb4c1d71ca3b958d1cbd929bc00e0e191" address="unix:///run/containerd/s/3bf77fee6abe710cdd6517210358a2184a22a62fefebae8c021c2d7c19981321" protocol=ttrpc version=3
Sep 12 23:09:28.022496 containerd[1590]: time="2025-09-12T23:09:28.022448677Z" level=info msg="Container a7fe01dfe0f7b303597a339e86dad1833c82cb2dbff9ffe94055c024ff05697e: CDI devices from CRI Config.CDIDevices: []"
Sep 12 23:09:28.030377 containerd[1590]: time="2025-09-12T23:09:28.030338967Z" level=info msg="CreateContainer within sandbox \"14814d6d845b18bf34bb87a245f63d9f28a425e8c0b05c4a174d39ce99032f94\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"70a02c1f08126de25d4365785c1ce38cf36546e6f54ce8c5bb31b555f8053b88\""
Sep 12 23:09:28.031070 containerd[1590]: time="2025-09-12T23:09:28.031041867Z" level=info msg="StartContainer for \"70a02c1f08126de25d4365785c1ce38cf36546e6f54ce8c5bb31b555f8053b88\""
Sep 12 23:09:28.033901 containerd[1590]: time="2025-09-12T23:09:28.032011159Z" level=info msg="connecting to shim 70a02c1f08126de25d4365785c1ce38cf36546e6f54ce8c5bb31b555f8053b88" address="unix:///run/containerd/s/3527f742b660d08889640ad165bab498cb5450c053c07cc4c12ea329b5b9e0e2" protocol=ttrpc version=3
Sep 12 23:09:28.034974 containerd[1590]: time="2025-09-12T23:09:28.034940950Z" level=info msg="CreateContainer within sandbox \"6ea9284f7f09f78594e8ed9be0414221b1de9e64901a48880073c0f7d9454710\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"a7fe01dfe0f7b303597a339e86dad1833c82cb2dbff9ffe94055c024ff05697e\""
Sep 12 23:09:28.035251 containerd[1590]: time="2025-09-12T23:09:28.035224349Z" level=info msg="StartContainer for \"a7fe01dfe0f7b303597a339e86dad1833c82cb2dbff9ffe94055c024ff05697e\""
Sep 12 23:09:28.036893 containerd[1590]: time="2025-09-12T23:09:28.036849506Z" level=info msg="connecting to shim a7fe01dfe0f7b303597a339e86dad1833c82cb2dbff9ffe94055c024ff05697e" address="unix:///run/containerd/s/28925a3f84dd7143881b3ae2b21b64fa2d721c1af65b4750d62e3d7588158102" protocol=ttrpc version=3
Sep 12 23:09:28.040030 systemd[1]: Started cri-containerd-9fc61457ad219795efffdfa69480717eb4c1d71ca3b958d1cbd929bc00e0e191.scope - libcontainer container 9fc61457ad219795efffdfa69480717eb4c1d71ca3b958d1cbd929bc00e0e191.
Sep 12 23:09:28.054512 kubelet[2415]: E0912 23:09:28.054463 2415 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.144:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.144:6443: connect: connection refused" logger="UnhandledError"
Sep 12 23:09:28.055955 systemd[1]: Started cri-containerd-70a02c1f08126de25d4365785c1ce38cf36546e6f54ce8c5bb31b555f8053b88.scope - libcontainer container 70a02c1f08126de25d4365785c1ce38cf36546e6f54ce8c5bb31b555f8053b88.
Sep 12 23:09:28.061192 systemd[1]: Started cri-containerd-a7fe01dfe0f7b303597a339e86dad1833c82cb2dbff9ffe94055c024ff05697e.scope - libcontainer container a7fe01dfe0f7b303597a339e86dad1833c82cb2dbff9ffe94055c024ff05697e.
Sep 12 23:09:28.117145 containerd[1590]: time="2025-09-12T23:09:28.117087989Z" level=info msg="StartContainer for \"9fc61457ad219795efffdfa69480717eb4c1d71ca3b958d1cbd929bc00e0e191\" returns successfully"
Sep 12 23:09:28.127623 containerd[1590]: time="2025-09-12T23:09:28.127567365Z" level=info msg="StartContainer for \"a7fe01dfe0f7b303597a339e86dad1833c82cb2dbff9ffe94055c024ff05697e\" returns successfully"
Sep 12 23:09:28.131877 containerd[1590]: time="2025-09-12T23:09:28.131840968Z" level=info msg="StartContainer for \"70a02c1f08126de25d4365785c1ce38cf36546e6f54ce8c5bb31b555f8053b88\" returns successfully"
Sep 12 23:09:28.143005 kubelet[2415]: I0912 23:09:28.142964 2415 kubelet_node_status.go:75] "Attempting to register node" node="localhost"
Sep 12 23:09:28.143445 kubelet[2415]: E0912 23:09:28.143412 2415 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.144:6443/api/v1/nodes\": dial tcp 10.0.0.144:6443: connect: connection refused" node="localhost"
Sep 12 23:09:28.946290 kubelet[2415]: I0912 23:09:28.946246 2415 kubelet_node_status.go:75] "Attempting to register node" node="localhost"
Sep 12 23:09:28.995277 kubelet[2415]: E0912 23:09:28.995236 2415 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Sep 12 23:09:28.995411 kubelet[2415]: E0912 23:09:28.995351 2415 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 12 23:09:28.997285 kubelet[2415]: E0912 23:09:28.997262 2415 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Sep 12 23:09:28.997355 kubelet[2415]: E0912 23:09:28.997342 2415 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 12 23:09:28.998870 kubelet[2415]: E0912 23:09:28.998851 2415 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Sep 12 23:09:28.998960 kubelet[2415]: E0912 23:09:28.998939 2415 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 12 23:09:29.148081 kubelet[2415]: E0912 23:09:29.148025 2415 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost"
Sep 12 23:09:29.953980 kubelet[2415]: I0912 23:09:29.953910 2415 kubelet_node_status.go:78] "Successfully registered node" node="localhost"
Sep 12 23:09:29.953980 kubelet[2415]: E0912 23:09:29.953962 2415 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not found"
Sep 12 23:09:30.001242 kubelet[2415]: E0912 23:09:30.001188 2415 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Sep 12 23:09:30.001411 kubelet[2415]: E0912 23:09:30.001334 2415 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Sep 12 23:09:30.001411 kubelet[2415]: E0912 23:09:30.001351 2415 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 12 23:09:30.001494 kubelet[2415]: E0912 23:09:30.001462 2415 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 12 23:09:30.059035 kubelet[2415]: I0912 23:09:30.058985 2415 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost"
Sep 12 23:09:30.083391 kubelet[2415]: E0912 23:09:30.083294 2415 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost"
Sep 12 23:09:30.084265 kubelet[2415]: I0912 23:09:30.083815 2415 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost"
Sep 12 23:09:30.087632 kubelet[2415]: E0912 23:09:30.087598 2415 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-localhost"
Sep 12 23:09:30.087815 kubelet[2415]: I0912 23:09:30.087791 2415 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost"
Sep 12 23:09:30.090432 kubelet[2415]: E0912 23:09:30.090180 2415 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost"
Sep 12 23:09:30.941956 kubelet[2415]: I0912 23:09:30.941904 2415 apiserver.go:52] "Watching apiserver"
Sep 12 23:09:30.958382 kubelet[2415]: I0912 23:09:30.958285 2415 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world"
Sep 12 23:09:31.001969 kubelet[2415]: I0912 23:09:31.001929 2415 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost"
Sep 12 23:09:31.008606 kubelet[2415]: E0912 23:09:31.008564 2415 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 12 23:09:32.004043 kubelet[2415]: E0912 23:09:32.003892 2415 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 12 23:09:32.364457 systemd[1]: Reload requested from client PID 2705 ('systemctl') (unit session-9.scope)...
Sep 12 23:09:32.364478 systemd[1]: Reloading...
Sep 12 23:09:32.449809 zram_generator::config[2748]: No configuration found.
Sep 12 23:09:32.631079 kubelet[2415]: I0912 23:09:32.630914 2415 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost"
Sep 12 23:09:32.639179 kubelet[2415]: E0912 23:09:32.639095 2415 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 12 23:09:32.714596 systemd[1]: Reloading finished in 349 ms.
Sep 12 23:09:32.747995 kubelet[2415]: I0912 23:09:32.747918 2415 dynamic_cafile_content.go:175] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Sep 12 23:09:32.748216 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 12 23:09:32.772099 systemd[1]: kubelet.service: Deactivated successfully.
Sep 12 23:09:32.772568 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 12 23:09:32.772642 systemd[1]: kubelet.service: Consumed 1.083s CPU time, 132.3M memory peak.
Sep 12 23:09:32.775122 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 12 23:09:33.036035 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 12 23:09:33.053257 (kubelet)[2793]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Sep 12 23:09:33.098498 kubelet[2793]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Sep 12 23:09:33.098498 kubelet[2793]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
Sep 12 23:09:33.098498 kubelet[2793]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Sep 12 23:09:33.099019 kubelet[2793]: I0912 23:09:33.098575 2793 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Sep 12 23:09:33.107229 kubelet[2793]: I0912 23:09:33.107181 2793 server.go:520] "Kubelet version" kubeletVersion="v1.32.4"
Sep 12 23:09:33.107229 kubelet[2793]: I0912 23:09:33.107216 2793 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Sep 12 23:09:33.107514 kubelet[2793]: I0912 23:09:33.107489 2793 server.go:954] "Client rotation is on, will bootstrap in background"
Sep 12 23:09:33.108739 kubelet[2793]: I0912 23:09:33.108711 2793 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
Sep 12 23:09:33.111061 kubelet[2793]: I0912 23:09:33.110991 2793 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Sep 12 23:09:33.115938 kubelet[2793]: I0912 23:09:33.115717 2793 server.go:1444] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd"
Sep 12 23:09:33.122447 kubelet[2793]: I0912 23:09:33.122391 2793 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Sep 12 23:09:33.122705 kubelet[2793]: I0912 23:09:33.122672 2793 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Sep 12 23:09:33.122949 kubelet[2793]: I0912 23:09:33.122703 2793 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Sep 12 23:09:33.122949 kubelet[2793]: I0912 23:09:33.122950 2793 topology_manager.go:138] "Creating topology manager with none policy"
Sep 12 23:09:33.123091 kubelet[2793]: I0912 23:09:33.122963 2793 container_manager_linux.go:304] "Creating device plugin manager"
Sep 12 23:09:33.123091 kubelet[2793]: I0912 23:09:33.123020 2793 state_mem.go:36] "Initialized new in-memory state store"
Sep 12 23:09:33.123261 kubelet[2793]: I0912 23:09:33.123242 2793 kubelet.go:446] "Attempting to sync node with API server"
Sep 12 23:09:33.123303 kubelet[2793]: I0912 23:09:33.123272 2793 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests"
Sep 12 23:09:33.123303 kubelet[2793]: I0912 23:09:33.123298 2793 kubelet.go:352] "Adding apiserver pod source"
Sep 12 23:09:33.123384 kubelet[2793]: I0912 23:09:33.123310 2793 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Sep 12 23:09:33.125604 kubelet[2793]: I0912 23:09:33.124365 2793 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v2.0.5" apiVersion="v1"
Sep 12 23:09:33.125604 kubelet[2793]: I0912 23:09:33.124884 2793 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Sep 12 23:09:33.125915 kubelet[2793]: I0912 23:09:33.125892 2793 watchdog_linux.go:99] "Systemd watchdog is not enabled"
Sep 12 23:09:33.125983 kubelet[2793]: I0912 23:09:33.125935 2793 server.go:1287] "Started kubelet"
Sep 12 23:09:33.131881 kubelet[2793]: I0912 23:09:33.131811 2793 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Sep 12 23:09:33.132149 kubelet[2793]: I0912 23:09:33.132121 2793 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Sep 12 23:09:33.132203 kubelet[2793]: I0912 23:09:33.132180 2793 server.go:169] "Starting to listen" address="0.0.0.0" port=10250
Sep 12 23:09:33.133306 kubelet[2793]: I0912 23:09:33.133272 2793 server.go:479] "Adding debug handlers to kubelet server"
Sep 12 23:09:33.182982 kubelet[2793]: E0912 23:09:33.136397 2793 kubelet.go:1555] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Sep 12 23:09:33.183069 kubelet[2793]: I0912 23:09:33.183027 2793 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Sep 12 23:09:33.183188 kubelet[2793]: I0912 23:09:33.183112 2793 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Sep 12 23:09:33.183952 kubelet[2793]: I0912 23:09:33.183930 2793 volume_manager.go:297] "Starting Kubelet Volume Manager"
Sep 12 23:09:33.184882 kubelet[2793]: I0912 23:09:33.184841 2793 desired_state_of_world_populator.go:150] "Desired state populator starts to run"
Sep 12 23:09:33.185316 kubelet[2793]: I0912 23:09:33.185287 2793 reconciler.go:26] "Reconciler: start to sync state"
Sep 12 23:09:33.186867 kubelet[2793]: I0912 23:09:33.186787 2793 factory.go:221] Registration of the systemd container factory successfully
Sep 12 23:09:33.186928 kubelet[2793]: I0912 23:09:33.186905 2793 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Sep 12 23:09:33.188964 kubelet[2793]: I0912 23:09:33.188933 2793 factory.go:221] Registration of the containerd container factory successfully
Sep 12 23:09:33.200610 kubelet[2793]: I0912 23:09:33.200185 2793 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Sep 12 23:09:33.202682 kubelet[2793]: I0912 23:09:33.202645 2793 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Sep 12 23:09:33.202682 kubelet[2793]: I0912 23:09:33.202674 2793 status_manager.go:227] "Starting to sync pod status with apiserver"
Sep 12 23:09:33.202833 kubelet[2793]: I0912 23:09:33.202695 2793 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
Sep 12 23:09:33.202833 kubelet[2793]: I0912 23:09:33.202703 2793 kubelet.go:2382] "Starting kubelet main sync loop"
Sep 12 23:09:33.202833 kubelet[2793]: E0912 23:09:33.202759 2793 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Sep 12 23:09:33.232754 kubelet[2793]: I0912 23:09:33.232716 2793 cpu_manager.go:221] "Starting CPU manager" policy="none"
Sep 12 23:09:33.232754 kubelet[2793]: I0912 23:09:33.232737 2793 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s"
Sep 12 23:09:33.232754 kubelet[2793]: I0912 23:09:33.232758 2793 state_mem.go:36] "Initialized new in-memory state store"
Sep 12 23:09:33.232994 kubelet[2793]: I0912 23:09:33.232964 2793 state_mem.go:88] "Updated default CPUSet" cpuSet=""
Sep 12 23:09:33.232994 kubelet[2793]: I0912 23:09:33.232978 2793 state_mem.go:96] "Updated CPUSet assignments" assignments={}
Sep 12 23:09:33.233069 kubelet[2793]: I0912 23:09:33.233002 2793 policy_none.go:49] "None policy: Start"
Sep 12 23:09:33.233069 kubelet[2793]: I0912 23:09:33.233015 2793 memory_manager.go:186] "Starting memorymanager" policy="None"
Sep 12 23:09:33.233069 kubelet[2793]: I0912 23:09:33.233029 2793 state_mem.go:35] "Initializing new in-memory state store"
Sep 12 23:09:33.233186 kubelet[2793]: I0912 23:09:33.233171 2793 state_mem.go:75] "Updated machine memory state"
Sep 12 23:09:33.238091 kubelet[2793]: I0912 23:09:33.238054 2793 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Sep 12 23:09:33.238637 kubelet[2793]: I0912 23:09:33.238603 2793 eviction_manager.go:189] "Eviction manager: starting control loop"
Sep 12 23:09:33.238702 kubelet[2793]: I0912 23:09:33.238625 2793 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Sep 12 23:09:33.238939 kubelet[2793]: I0912 23:09:33.238907 2793 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Sep 12 23:09:33.240982 kubelet[2793]: E0912 23:09:33.240934 2793 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime"
Sep 12 23:09:33.305047 kubelet[2793]: I0912 23:09:33.304295 2793 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost"
Sep 12 23:09:33.305047 kubelet[2793]: I0912 23:09:33.304381 2793 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost"
Sep 12 23:09:33.305047 kubelet[2793]: I0912 23:09:33.304484 2793 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost"
Sep 12 23:09:33.350220 kubelet[2793]: I0912 23:09:33.350178 2793 kubelet_node_status.go:75] "Attempting to register node" node="localhost"
Sep 12 23:09:33.358167 kubelet[2793]: E0912 23:09:33.358108 2793 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost"
Sep 12 23:09:33.358664 kubelet[2793]: E0912 23:09:33.358560 2793 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost"
Sep 12 23:09:33.386074 kubelet[2793]: I0912 23:09:33.385983 2793 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/df416b65d45947c2d3c0934d5379a140-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"df416b65d45947c2d3c0934d5379a140\") " pod="kube-system/kube-apiserver-localhost"
Sep 12 23:09:33.422106 kubelet[2793]: I0912 23:09:33.421928 2793 kubelet_node_status.go:124] "Node was previously registered" node="localhost"
Sep 12 23:09:33.422106 kubelet[2793]: I0912 23:09:33.422056 2793 kubelet_node_status.go:78] "Successfully registered node" node="localhost"
Sep 12 23:09:33.486585 kubelet[2793]: I0912 23:09:33.486477 2793 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/1403266a9792debaa127cd8df7a81c3c-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"1403266a9792debaa127cd8df7a81c3c\") " pod="kube-system/kube-controller-manager-localhost"
Sep 12 23:09:33.486585 kubelet[2793]: I0912 23:09:33.486561 2793 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/1403266a9792debaa127cd8df7a81c3c-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"1403266a9792debaa127cd8df7a81c3c\") " pod="kube-system/kube-controller-manager-localhost"
Sep 12 23:09:33.486921 kubelet[2793]: I0912 23:09:33.486733 2793 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/72a30db4fc25e4da65a3b99eba43be94-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"72a30db4fc25e4da65a3b99eba43be94\") " pod="kube-system/kube-scheduler-localhost"
Sep 12 23:09:33.486921 kubelet[2793]: I0912 23:09:33.486807 2793 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/df416b65d45947c2d3c0934d5379a140-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"df416b65d45947c2d3c0934d5379a140\") " pod="kube-system/kube-apiserver-localhost"
Sep 12 23:09:33.486921 kubelet[2793]: I0912 23:09:33.486831
2793 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/df416b65d45947c2d3c0934d5379a140-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"df416b65d45947c2d3c0934d5379a140\") " pod="kube-system/kube-apiserver-localhost" Sep 12 23:09:33.486921 kubelet[2793]: I0912 23:09:33.486855 2793 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/1403266a9792debaa127cd8df7a81c3c-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"1403266a9792debaa127cd8df7a81c3c\") " pod="kube-system/kube-controller-manager-localhost" Sep 12 23:09:33.486921 kubelet[2793]: I0912 23:09:33.486881 2793 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/1403266a9792debaa127cd8df7a81c3c-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"1403266a9792debaa127cd8df7a81c3c\") " pod="kube-system/kube-controller-manager-localhost" Sep 12 23:09:33.487083 kubelet[2793]: I0912 23:09:33.486899 2793 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/1403266a9792debaa127cd8df7a81c3c-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"1403266a9792debaa127cd8df7a81c3c\") " pod="kube-system/kube-controller-manager-localhost" Sep 12 23:09:33.649572 kubelet[2793]: E0912 23:09:33.649525 2793 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 23:09:33.658852 kubelet[2793]: E0912 23:09:33.658795 2793 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, 
the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 23:09:33.659051 kubelet[2793]: E0912 23:09:33.659022 2793 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 23:09:33.709165 sudo[2832]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Sep 12 23:09:33.709591 sudo[2832]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0) Sep 12 23:09:34.039897 sudo[2832]: pam_unix(sudo:session): session closed for user root Sep 12 23:09:34.124029 kubelet[2793]: I0912 23:09:34.123984 2793 apiserver.go:52] "Watching apiserver" Sep 12 23:09:34.185819 kubelet[2793]: I0912 23:09:34.185750 2793 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Sep 12 23:09:34.214818 kubelet[2793]: I0912 23:09:34.214438 2793 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Sep 12 23:09:34.214818 kubelet[2793]: E0912 23:09:34.214479 2793 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 23:09:34.214818 kubelet[2793]: E0912 23:09:34.214824 2793 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 23:09:34.223217 kubelet[2793]: E0912 23:09:34.223169 2793 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Sep 12 23:09:34.223432 kubelet[2793]: E0912 23:09:34.223393 2793 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 23:09:34.241140 
kubelet[2793]: I0912 23:09:34.241059 2793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=1.2410256 podStartE2EDuration="1.2410256s" podCreationTimestamp="2025-09-12 23:09:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-12 23:09:34.231332954 +0000 UTC m=+1.173672076" watchObservedRunningTime="2025-09-12 23:09:34.2410256 +0000 UTC m=+1.183364712" Sep 12 23:09:34.249092 kubelet[2793]: I0912 23:09:34.249004 2793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=3.248980523 podStartE2EDuration="3.248980523s" podCreationTimestamp="2025-09-12 23:09:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-12 23:09:34.241309767 +0000 UTC m=+1.183648879" watchObservedRunningTime="2025-09-12 23:09:34.248980523 +0000 UTC m=+1.191319635" Sep 12 23:09:34.259980 kubelet[2793]: I0912 23:09:34.259884 2793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=2.259858661 podStartE2EDuration="2.259858661s" podCreationTimestamp="2025-09-12 23:09:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-12 23:09:34.249331393 +0000 UTC m=+1.191670505" watchObservedRunningTime="2025-09-12 23:09:34.259858661 +0000 UTC m=+1.202197783" Sep 12 23:09:35.216843 kubelet[2793]: E0912 23:09:35.216789 2793 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 23:09:35.218472 kubelet[2793]: E0912 23:09:35.218377 2793 dns.go:153] "Nameserver limits exceeded" 
err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 23:09:35.881434 sudo[1821]: pam_unix(sudo:session): session closed for user root Sep 12 23:09:35.886549 sshd[1820]: Connection closed by 10.0.0.1 port 59206 Sep 12 23:09:35.887346 sshd-session[1817]: pam_unix(sshd:session): session closed for user core Sep 12 23:09:35.895129 systemd[1]: sshd@8-10.0.0.144:22-10.0.0.1:59206.service: Deactivated successfully. Sep 12 23:09:35.897598 systemd-logind[1573]: Session 9 logged out. Waiting for processes to exit. Sep 12 23:09:35.899327 systemd[1]: session-9.scope: Deactivated successfully. Sep 12 23:09:35.903349 systemd[1]: session-9.scope: Consumed 5.162s CPU time, 265.9M memory peak. Sep 12 23:09:35.909272 systemd-logind[1573]: Removed session 9. Sep 12 23:09:36.218101 kubelet[2793]: E0912 23:09:36.217960 2793 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 23:09:38.233021 kubelet[2793]: I0912 23:09:38.232979 2793 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Sep 12 23:09:38.233442 containerd[1590]: time="2025-09-12T23:09:38.233332016Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
Sep 12 23:09:38.233707 kubelet[2793]: I0912 23:09:38.233521 2793 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Sep 12 23:09:38.332080 kubelet[2793]: E0912 23:09:38.332042 2793 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 23:09:39.041719 kubelet[2793]: I0912 23:09:39.041666 2793 status_manager.go:890] "Failed to get status for pod" podUID="c5b5956c-54ec-40ce-9260-91993ddd6d4d" pod="kube-system/kube-proxy-vqqff" err="pods \"kube-proxy-vqqff\" is forbidden: User \"system:node:localhost\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'localhost' and this object" Sep 12 23:09:39.044923 kubelet[2793]: W0912 23:09:39.044632 2793 reflector.go:569] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:localhost" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'localhost' and this object Sep 12 23:09:39.044923 kubelet[2793]: E0912 23:09:39.044678 2793 reflector.go:166] "Unhandled Error" err="object-\"kube-system\"/\"kube-proxy\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-proxy\" is forbidden: User \"system:node:localhost\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'localhost' and this object" logger="UnhandledError" Sep 12 23:09:39.045335 kubelet[2793]: I0912 23:09:39.045313 2793 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/c5b5956c-54ec-40ce-9260-91993ddd6d4d-kube-proxy\") pod \"kube-proxy-vqqff\" (UID: \"c5b5956c-54ec-40ce-9260-91993ddd6d4d\") " 
pod="kube-system/kube-proxy-vqqff" Sep 12 23:09:39.045481 kubelet[2793]: I0912 23:09:39.045461 2793 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c5b5956c-54ec-40ce-9260-91993ddd6d4d-lib-modules\") pod \"kube-proxy-vqqff\" (UID: \"c5b5956c-54ec-40ce-9260-91993ddd6d4d\") " pod="kube-system/kube-proxy-vqqff" Sep 12 23:09:39.045645 kubelet[2793]: I0912 23:09:39.045562 2793 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v4vdd\" (UniqueName: \"kubernetes.io/projected/c5b5956c-54ec-40ce-9260-91993ddd6d4d-kube-api-access-v4vdd\") pod \"kube-proxy-vqqff\" (UID: \"c5b5956c-54ec-40ce-9260-91993ddd6d4d\") " pod="kube-system/kube-proxy-vqqff" Sep 12 23:09:39.045645 kubelet[2793]: I0912 23:09:39.045592 2793 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/c5b5956c-54ec-40ce-9260-91993ddd6d4d-xtables-lock\") pod \"kube-proxy-vqqff\" (UID: \"c5b5956c-54ec-40ce-9260-91993ddd6d4d\") " pod="kube-system/kube-proxy-vqqff" Sep 12 23:09:39.054452 systemd[1]: Created slice kubepods-besteffort-podc5b5956c_54ec_40ce_9260_91993ddd6d4d.slice - libcontainer container kubepods-besteffort-podc5b5956c_54ec_40ce_9260_91993ddd6d4d.slice. Sep 12 23:09:39.095074 systemd[1]: Created slice kubepods-burstable-pod1b7c7a1e_d4be_45bf_ba2c_81df617bdf5c.slice - libcontainer container kubepods-burstable-pod1b7c7a1e_d4be_45bf_ba2c_81df617bdf5c.slice. Sep 12 23:09:39.141286 systemd[1]: Created slice kubepods-besteffort-podebf74b46_da1c_4e44_b6b6_77623d9c780f.slice - libcontainer container kubepods-besteffort-podebf74b46_da1c_4e44_b6b6_77623d9c780f.slice. 
Sep 12 23:09:39.224841 kubelet[2793]: E0912 23:09:39.224798 2793 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 23:09:39.246182 kubelet[2793]: I0912 23:09:39.246129 2793 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/1b7c7a1e-d4be-45bf-ba2c-81df617bdf5c-host-proc-sys-net\") pod \"cilium-7pnrh\" (UID: \"1b7c7a1e-d4be-45bf-ba2c-81df617bdf5c\") " pod="kube-system/cilium-7pnrh" Sep 12 23:09:39.246182 kubelet[2793]: I0912 23:09:39.246174 2793 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/1b7c7a1e-d4be-45bf-ba2c-81df617bdf5c-bpf-maps\") pod \"cilium-7pnrh\" (UID: \"1b7c7a1e-d4be-45bf-ba2c-81df617bdf5c\") " pod="kube-system/cilium-7pnrh" Sep 12 23:09:39.246726 kubelet[2793]: I0912 23:09:39.246212 2793 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/1b7c7a1e-d4be-45bf-ba2c-81df617bdf5c-clustermesh-secrets\") pod \"cilium-7pnrh\" (UID: \"1b7c7a1e-d4be-45bf-ba2c-81df617bdf5c\") " pod="kube-system/cilium-7pnrh" Sep 12 23:09:39.246726 kubelet[2793]: I0912 23:09:39.246232 2793 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/ebf74b46-da1c-4e44-b6b6-77623d9c780f-cilium-config-path\") pod \"cilium-operator-6c4d7847fc-4c6kw\" (UID: \"ebf74b46-da1c-4e44-b6b6-77623d9c780f\") " pod="kube-system/cilium-operator-6c4d7847fc-4c6kw" Sep 12 23:09:39.246726 kubelet[2793]: I0912 23:09:39.246279 2793 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: 
\"kubernetes.io/host-path/1b7c7a1e-d4be-45bf-ba2c-81df617bdf5c-etc-cni-netd\") pod \"cilium-7pnrh\" (UID: \"1b7c7a1e-d4be-45bf-ba2c-81df617bdf5c\") " pod="kube-system/cilium-7pnrh" Sep 12 23:09:39.246875 kubelet[2793]: I0912 23:09:39.246794 2793 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/1b7c7a1e-d4be-45bf-ba2c-81df617bdf5c-cilium-config-path\") pod \"cilium-7pnrh\" (UID: \"1b7c7a1e-d4be-45bf-ba2c-81df617bdf5c\") " pod="kube-system/cilium-7pnrh" Sep 12 23:09:39.246875 kubelet[2793]: I0912 23:09:39.246820 2793 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-74cwt\" (UniqueName: \"kubernetes.io/projected/1b7c7a1e-d4be-45bf-ba2c-81df617bdf5c-kube-api-access-74cwt\") pod \"cilium-7pnrh\" (UID: \"1b7c7a1e-d4be-45bf-ba2c-81df617bdf5c\") " pod="kube-system/cilium-7pnrh" Sep 12 23:09:39.246875 kubelet[2793]: I0912 23:09:39.246841 2793 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nk5sv\" (UniqueName: \"kubernetes.io/projected/ebf74b46-da1c-4e44-b6b6-77623d9c780f-kube-api-access-nk5sv\") pod \"cilium-operator-6c4d7847fc-4c6kw\" (UID: \"ebf74b46-da1c-4e44-b6b6-77623d9c780f\") " pod="kube-system/cilium-operator-6c4d7847fc-4c6kw" Sep 12 23:09:39.246875 kubelet[2793]: I0912 23:09:39.246861 2793 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/1b7c7a1e-d4be-45bf-ba2c-81df617bdf5c-cni-path\") pod \"cilium-7pnrh\" (UID: \"1b7c7a1e-d4be-45bf-ba2c-81df617bdf5c\") " pod="kube-system/cilium-7pnrh" Sep 12 23:09:39.247081 kubelet[2793]: I0912 23:09:39.246887 2793 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: 
\"kubernetes.io/host-path/1b7c7a1e-d4be-45bf-ba2c-81df617bdf5c-xtables-lock\") pod \"cilium-7pnrh\" (UID: \"1b7c7a1e-d4be-45bf-ba2c-81df617bdf5c\") " pod="kube-system/cilium-7pnrh" Sep 12 23:09:39.247081 kubelet[2793]: I0912 23:09:39.246905 2793 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/1b7c7a1e-d4be-45bf-ba2c-81df617bdf5c-hostproc\") pod \"cilium-7pnrh\" (UID: \"1b7c7a1e-d4be-45bf-ba2c-81df617bdf5c\") " pod="kube-system/cilium-7pnrh" Sep 12 23:09:39.247081 kubelet[2793]: I0912 23:09:39.246982 2793 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/1b7c7a1e-d4be-45bf-ba2c-81df617bdf5c-lib-modules\") pod \"cilium-7pnrh\" (UID: \"1b7c7a1e-d4be-45bf-ba2c-81df617bdf5c\") " pod="kube-system/cilium-7pnrh" Sep 12 23:09:39.247168 kubelet[2793]: I0912 23:09:39.247100 2793 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/1b7c7a1e-d4be-45bf-ba2c-81df617bdf5c-cilium-run\") pod \"cilium-7pnrh\" (UID: \"1b7c7a1e-d4be-45bf-ba2c-81df617bdf5c\") " pod="kube-system/cilium-7pnrh" Sep 12 23:09:39.247199 kubelet[2793]: I0912 23:09:39.247165 2793 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/1b7c7a1e-d4be-45bf-ba2c-81df617bdf5c-host-proc-sys-kernel\") pod \"cilium-7pnrh\" (UID: \"1b7c7a1e-d4be-45bf-ba2c-81df617bdf5c\") " pod="kube-system/cilium-7pnrh" Sep 12 23:09:39.247249 kubelet[2793]: I0912 23:09:39.247222 2793 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/1b7c7a1e-d4be-45bf-ba2c-81df617bdf5c-cilium-cgroup\") pod \"cilium-7pnrh\" (UID: 
\"1b7c7a1e-d4be-45bf-ba2c-81df617bdf5c\") " pod="kube-system/cilium-7pnrh" Sep 12 23:09:39.247285 kubelet[2793]: I0912 23:09:39.247251 2793 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/1b7c7a1e-d4be-45bf-ba2c-81df617bdf5c-hubble-tls\") pod \"cilium-7pnrh\" (UID: \"1b7c7a1e-d4be-45bf-ba2c-81df617bdf5c\") " pod="kube-system/cilium-7pnrh" Sep 12 23:09:39.405537 kubelet[2793]: E0912 23:09:39.405156 2793 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 23:09:39.406091 containerd[1590]: time="2025-09-12T23:09:39.406039553Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-7pnrh,Uid:1b7c7a1e-d4be-45bf-ba2c-81df617bdf5c,Namespace:kube-system,Attempt:0,}" Sep 12 23:09:39.447071 kubelet[2793]: E0912 23:09:39.447028 2793 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 23:09:39.448716 containerd[1590]: time="2025-09-12T23:09:39.448636960Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-4c6kw,Uid:ebf74b46-da1c-4e44-b6b6-77623d9c780f,Namespace:kube-system,Attempt:0,}" Sep 12 23:09:39.463156 containerd[1590]: time="2025-09-12T23:09:39.463060403Z" level=info msg="connecting to shim 993e4f193e79b0c050eb6dfb3d5124b234e4ceb25c2e61982d0dc8570075b8ce" address="unix:///run/containerd/s/d216ce5f39af932b418587b061bfff33649736736c3ed32cb1c74b62e1df53c1" namespace=k8s.io protocol=ttrpc version=3 Sep 12 23:09:39.478005 containerd[1590]: time="2025-09-12T23:09:39.477647630Z" level=info msg="connecting to shim 73890b387f12e6c0d8f890b8a0c17a32291df62bf255c013d54f4848b8223b37" address="unix:///run/containerd/s/ed19ab74225f37e64b05784425b4d7958d8eb1f2fe875d76f6229e891b2813e0" 
namespace=k8s.io protocol=ttrpc version=3 Sep 12 23:09:39.488145 systemd[1]: Started cri-containerd-993e4f193e79b0c050eb6dfb3d5124b234e4ceb25c2e61982d0dc8570075b8ce.scope - libcontainer container 993e4f193e79b0c050eb6dfb3d5124b234e4ceb25c2e61982d0dc8570075b8ce. Sep 12 23:09:39.509903 systemd[1]: Started cri-containerd-73890b387f12e6c0d8f890b8a0c17a32291df62bf255c013d54f4848b8223b37.scope - libcontainer container 73890b387f12e6c0d8f890b8a0c17a32291df62bf255c013d54f4848b8223b37. Sep 12 23:09:39.522846 containerd[1590]: time="2025-09-12T23:09:39.522807196Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-7pnrh,Uid:1b7c7a1e-d4be-45bf-ba2c-81df617bdf5c,Namespace:kube-system,Attempt:0,} returns sandbox id \"993e4f193e79b0c050eb6dfb3d5124b234e4ceb25c2e61982d0dc8570075b8ce\"" Sep 12 23:09:39.523859 kubelet[2793]: E0912 23:09:39.523827 2793 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 23:09:39.526092 containerd[1590]: time="2025-09-12T23:09:39.526052451Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Sep 12 23:09:39.565263 containerd[1590]: time="2025-09-12T23:09:39.565216069Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-4c6kw,Uid:ebf74b46-da1c-4e44-b6b6-77623d9c780f,Namespace:kube-system,Attempt:0,} returns sandbox id \"73890b387f12e6c0d8f890b8a0c17a32291df62bf255c013d54f4848b8223b37\"" Sep 12 23:09:39.566048 kubelet[2793]: E0912 23:09:39.566022 2793 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 23:09:40.266177 kubelet[2793]: E0912 23:09:40.266122 2793 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, 
the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 23:09:40.266866 containerd[1590]: time="2025-09-12T23:09:40.266540006Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-vqqff,Uid:c5b5956c-54ec-40ce-9260-91993ddd6d4d,Namespace:kube-system,Attempt:0,}" Sep 12 23:09:40.645870 containerd[1590]: time="2025-09-12T23:09:40.645802988Z" level=info msg="connecting to shim fabf328f1e1568853d87e869feff9f3818ff3f3bcf34cc3ae830a189b1ff544e" address="unix:///run/containerd/s/a0d14efa413b26c67cf2a3b622f74f7c921c79b10017103b05512f2b8a68106d" namespace=k8s.io protocol=ttrpc version=3 Sep 12 23:09:40.707968 systemd[1]: Started cri-containerd-fabf328f1e1568853d87e869feff9f3818ff3f3bcf34cc3ae830a189b1ff544e.scope - libcontainer container fabf328f1e1568853d87e869feff9f3818ff3f3bcf34cc3ae830a189b1ff544e. Sep 12 23:09:40.744895 containerd[1590]: time="2025-09-12T23:09:40.744825054Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-vqqff,Uid:c5b5956c-54ec-40ce-9260-91993ddd6d4d,Namespace:kube-system,Attempt:0,} returns sandbox id \"fabf328f1e1568853d87e869feff9f3818ff3f3bcf34cc3ae830a189b1ff544e\"" Sep 12 23:09:40.745596 kubelet[2793]: E0912 23:09:40.745563 2793 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 23:09:40.747493 containerd[1590]: time="2025-09-12T23:09:40.747409640Z" level=info msg="CreateContainer within sandbox \"fabf328f1e1568853d87e869feff9f3818ff3f3bcf34cc3ae830a189b1ff544e\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Sep 12 23:09:40.761913 containerd[1590]: time="2025-09-12T23:09:40.761867371Z" level=info msg="Container bd9d3f8bfc8c95778de9d5a1e88527c7ccc342aca1c79529114777e41fc630ed: CDI devices from CRI Config.CDIDevices: []" Sep 12 23:09:40.765985 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3851496464.mount: Deactivated successfully. 
Sep 12 23:09:40.778313 containerd[1590]: time="2025-09-12T23:09:40.778259435Z" level=info msg="CreateContainer within sandbox \"fabf328f1e1568853d87e869feff9f3818ff3f3bcf34cc3ae830a189b1ff544e\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"bd9d3f8bfc8c95778de9d5a1e88527c7ccc342aca1c79529114777e41fc630ed\"" Sep 12 23:09:40.778856 containerd[1590]: time="2025-09-12T23:09:40.778832928Z" level=info msg="StartContainer for \"bd9d3f8bfc8c95778de9d5a1e88527c7ccc342aca1c79529114777e41fc630ed\"" Sep 12 23:09:40.780128 containerd[1590]: time="2025-09-12T23:09:40.780106817Z" level=info msg="connecting to shim bd9d3f8bfc8c95778de9d5a1e88527c7ccc342aca1c79529114777e41fc630ed" address="unix:///run/containerd/s/a0d14efa413b26c67cf2a3b622f74f7c921c79b10017103b05512f2b8a68106d" protocol=ttrpc version=3 Sep 12 23:09:40.804027 systemd[1]: Started cri-containerd-bd9d3f8bfc8c95778de9d5a1e88527c7ccc342aca1c79529114777e41fc630ed.scope - libcontainer container bd9d3f8bfc8c95778de9d5a1e88527c7ccc342aca1c79529114777e41fc630ed. 
Sep 12 23:09:40.852353 containerd[1590]: time="2025-09-12T23:09:40.852307298Z" level=info msg="StartContainer for \"bd9d3f8bfc8c95778de9d5a1e88527c7ccc342aca1c79529114777e41fc630ed\" returns successfully" Sep 12 23:09:41.118399 kubelet[2793]: E0912 23:09:41.118345 2793 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 23:09:41.231357 kubelet[2793]: E0912 23:09:41.231319 2793 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 23:09:41.231496 kubelet[2793]: E0912 23:09:41.231399 2793 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 23:09:41.248947 kubelet[2793]: I0912 23:09:41.248884 2793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-vqqff" podStartSLOduration=3.248842858 podStartE2EDuration="3.248842858s" podCreationTimestamp="2025-09-12 23:09:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-12 23:09:41.248407196 +0000 UTC m=+8.190746308" watchObservedRunningTime="2025-09-12 23:09:41.248842858 +0000 UTC m=+8.191181970" Sep 12 23:09:45.227730 kubelet[2793]: E0912 23:09:45.227681 2793 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 23:09:48.487789 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1185767617.mount: Deactivated successfully. 
Sep 12 23:09:52.779355 containerd[1590]: time="2025-09-12T23:09:52.779281461Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 23:09:52.797613 containerd[1590]: time="2025-09-12T23:09:52.797535967Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=166730503" Sep 12 23:09:52.809131 containerd[1590]: time="2025-09-12T23:09:52.809076038Z" level=info msg="ImageCreate event name:\"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 23:09:52.810326 containerd[1590]: time="2025-09-12T23:09:52.810284394Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"166719855\" in 13.284185797s" Sep 12 23:09:52.810326 containerd[1590]: time="2025-09-12T23:09:52.810318062Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\"" Sep 12 23:09:52.811783 containerd[1590]: time="2025-09-12T23:09:52.811525737Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Sep 12 23:09:52.812923 containerd[1590]: time="2025-09-12T23:09:52.812873043Z" level=info msg="CreateContainer within sandbox \"993e4f193e79b0c050eb6dfb3d5124b234e4ceb25c2e61982d0dc8570075b8ce\" for container 
&ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Sep 12 23:09:52.963965 containerd[1590]: time="2025-09-12T23:09:52.963905217Z" level=info msg="Container c1196405cf67b90d5e32159454cb92e80418b28cd11447d076778e4dd192a3b5: CDI devices from CRI Config.CDIDevices: []" Sep 12 23:09:53.051322 containerd[1590]: time="2025-09-12T23:09:53.051177040Z" level=info msg="CreateContainer within sandbox \"993e4f193e79b0c050eb6dfb3d5124b234e4ceb25c2e61982d0dc8570075b8ce\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"c1196405cf67b90d5e32159454cb92e80418b28cd11447d076778e4dd192a3b5\"" Sep 12 23:09:53.052062 containerd[1590]: time="2025-09-12T23:09:53.052008103Z" level=info msg="StartContainer for \"c1196405cf67b90d5e32159454cb92e80418b28cd11447d076778e4dd192a3b5\"" Sep 12 23:09:53.053550 containerd[1590]: time="2025-09-12T23:09:53.053516528Z" level=info msg="connecting to shim c1196405cf67b90d5e32159454cb92e80418b28cd11447d076778e4dd192a3b5" address="unix:///run/containerd/s/d216ce5f39af932b418587b061bfff33649736736c3ed32cb1c74b62e1df53c1" protocol=ttrpc version=3 Sep 12 23:09:53.078932 systemd[1]: Started cri-containerd-c1196405cf67b90d5e32159454cb92e80418b28cd11447d076778e4dd192a3b5.scope - libcontainer container c1196405cf67b90d5e32159454cb92e80418b28cd11447d076778e4dd192a3b5. Sep 12 23:09:53.128961 systemd[1]: cri-containerd-c1196405cf67b90d5e32159454cb92e80418b28cd11447d076778e4dd192a3b5.scope: Deactivated successfully. Sep 12 23:09:53.129556 systemd[1]: cri-containerd-c1196405cf67b90d5e32159454cb92e80418b28cd11447d076778e4dd192a3b5.scope: Consumed 29ms CPU time, 6.8M memory peak, 4K read from disk, 3.2M written to disk. 
Sep 12 23:09:53.131650 containerd[1590]: time="2025-09-12T23:09:53.131595734Z" level=info msg="TaskExit event in podsandbox handler container_id:\"c1196405cf67b90d5e32159454cb92e80418b28cd11447d076778e4dd192a3b5\" id:\"c1196405cf67b90d5e32159454cb92e80418b28cd11447d076778e4dd192a3b5\" pid:3221 exited_at:{seconds:1757718593 nanos:130776416}" Sep 12 23:09:53.425499 containerd[1590]: time="2025-09-12T23:09:53.425374005Z" level=info msg="received exit event container_id:\"c1196405cf67b90d5e32159454cb92e80418b28cd11447d076778e4dd192a3b5\" id:\"c1196405cf67b90d5e32159454cb92e80418b28cd11447d076778e4dd192a3b5\" pid:3221 exited_at:{seconds:1757718593 nanos:130776416}" Sep 12 23:09:53.427606 containerd[1590]: time="2025-09-12T23:09:53.427552428Z" level=info msg="StartContainer for \"c1196405cf67b90d5e32159454cb92e80418b28cd11447d076778e4dd192a3b5\" returns successfully" Sep 12 23:09:53.448797 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c1196405cf67b90d5e32159454cb92e80418b28cd11447d076778e4dd192a3b5-rootfs.mount: Deactivated successfully. 
Sep 12 23:09:53.570581 kubelet[2793]: E0912 23:09:53.570533 2793 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 23:09:53.573935 containerd[1590]: time="2025-09-12T23:09:53.573895479Z" level=info msg="CreateContainer within sandbox \"993e4f193e79b0c050eb6dfb3d5124b234e4ceb25c2e61982d0dc8570075b8ce\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Sep 12 23:09:53.584287 containerd[1590]: time="2025-09-12T23:09:53.584236398Z" level=info msg="Container f292a7dbe2971ac8711df24906c6c2d54706c5e3660db9c351c358d9b0862fc9: CDI devices from CRI Config.CDIDevices: []" Sep 12 23:09:53.592662 containerd[1590]: time="2025-09-12T23:09:53.592601852Z" level=info msg="CreateContainer within sandbox \"993e4f193e79b0c050eb6dfb3d5124b234e4ceb25c2e61982d0dc8570075b8ce\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"f292a7dbe2971ac8711df24906c6c2d54706c5e3660db9c351c358d9b0862fc9\"" Sep 12 23:09:53.594091 containerd[1590]: time="2025-09-12T23:09:53.593343043Z" level=info msg="StartContainer for \"f292a7dbe2971ac8711df24906c6c2d54706c5e3660db9c351c358d9b0862fc9\"" Sep 12 23:09:53.594483 containerd[1590]: time="2025-09-12T23:09:53.594445441Z" level=info msg="connecting to shim f292a7dbe2971ac8711df24906c6c2d54706c5e3660db9c351c358d9b0862fc9" address="unix:///run/containerd/s/d216ce5f39af932b418587b061bfff33649736736c3ed32cb1c74b62e1df53c1" protocol=ttrpc version=3 Sep 12 23:09:53.615895 systemd[1]: Started cri-containerd-f292a7dbe2971ac8711df24906c6c2d54706c5e3660db9c351c358d9b0862fc9.scope - libcontainer container f292a7dbe2971ac8711df24906c6c2d54706c5e3660db9c351c358d9b0862fc9. 
Sep 12 23:09:53.647889 containerd[1590]: time="2025-09-12T23:09:53.647842139Z" level=info msg="StartContainer for \"f292a7dbe2971ac8711df24906c6c2d54706c5e3660db9c351c358d9b0862fc9\" returns successfully" Sep 12 23:09:53.660796 systemd[1]: systemd-sysctl.service: Deactivated successfully. Sep 12 23:09:53.661480 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Sep 12 23:09:53.662003 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... Sep 12 23:09:53.664075 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Sep 12 23:09:53.665126 systemd[1]: cri-containerd-f292a7dbe2971ac8711df24906c6c2d54706c5e3660db9c351c358d9b0862fc9.scope: Deactivated successfully. Sep 12 23:09:53.666604 containerd[1590]: time="2025-09-12T23:09:53.666561137Z" level=info msg="TaskExit event in podsandbox handler container_id:\"f292a7dbe2971ac8711df24906c6c2d54706c5e3660db9c351c358d9b0862fc9\" id:\"f292a7dbe2971ac8711df24906c6c2d54706c5e3660db9c351c358d9b0862fc9\" pid:3267 exited_at:{seconds:1757718593 nanos:665048323}" Sep 12 23:09:53.666675 containerd[1590]: time="2025-09-12T23:09:53.666646258Z" level=info msg="received exit event container_id:\"f292a7dbe2971ac8711df24906c6c2d54706c5e3660db9c351c358d9b0862fc9\" id:\"f292a7dbe2971ac8711df24906c6c2d54706c5e3660db9c351c358d9b0862fc9\" pid:3267 exited_at:{seconds:1757718593 nanos:665048323}" Sep 12 23:09:53.697915 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. 
Sep 12 23:09:54.576418 kubelet[2793]: E0912 23:09:54.576179 2793 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 23:09:54.580183 containerd[1590]: time="2025-09-12T23:09:54.580138045Z" level=info msg="CreateContainer within sandbox \"993e4f193e79b0c050eb6dfb3d5124b234e4ceb25c2e61982d0dc8570075b8ce\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Sep 12 23:09:54.592900 containerd[1590]: time="2025-09-12T23:09:54.592834493Z" level=info msg="Container 261f3c3cec6c94d1a2c39ab74d5b7e96e727e6e8fe2bfc350f052748f0a978d4: CDI devices from CRI Config.CDIDevices: []" Sep 12 23:09:54.604376 containerd[1590]: time="2025-09-12T23:09:54.604324897Z" level=info msg="CreateContainer within sandbox \"993e4f193e79b0c050eb6dfb3d5124b234e4ceb25c2e61982d0dc8570075b8ce\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"261f3c3cec6c94d1a2c39ab74d5b7e96e727e6e8fe2bfc350f052748f0a978d4\"" Sep 12 23:09:54.604791 containerd[1590]: time="2025-09-12T23:09:54.604754571Z" level=info msg="StartContainer for \"261f3c3cec6c94d1a2c39ab74d5b7e96e727e6e8fe2bfc350f052748f0a978d4\"" Sep 12 23:09:54.606113 containerd[1590]: time="2025-09-12T23:09:54.606071497Z" level=info msg="connecting to shim 261f3c3cec6c94d1a2c39ab74d5b7e96e727e6e8fe2bfc350f052748f0a978d4" address="unix:///run/containerd/s/d216ce5f39af932b418587b061bfff33649736736c3ed32cb1c74b62e1df53c1" protocol=ttrpc version=3 Sep 12 23:09:54.631947 systemd[1]: Started cri-containerd-261f3c3cec6c94d1a2c39ab74d5b7e96e727e6e8fe2bfc350f052748f0a978d4.scope - libcontainer container 261f3c3cec6c94d1a2c39ab74d5b7e96e727e6e8fe2bfc350f052748f0a978d4. Sep 12 23:09:54.676707 systemd[1]: cri-containerd-261f3c3cec6c94d1a2c39ab74d5b7e96e727e6e8fe2bfc350f052748f0a978d4.scope: Deactivated successfully. 
Sep 12 23:09:54.677753 containerd[1590]: time="2025-09-12T23:09:54.677710463Z" level=info msg="TaskExit event in podsandbox handler container_id:\"261f3c3cec6c94d1a2c39ab74d5b7e96e727e6e8fe2bfc350f052748f0a978d4\" id:\"261f3c3cec6c94d1a2c39ab74d5b7e96e727e6e8fe2bfc350f052748f0a978d4\" pid:3322 exited_at:{seconds:1757718594 nanos:677440961}" Sep 12 23:09:54.737036 containerd[1590]: time="2025-09-12T23:09:54.736982512Z" level=info msg="received exit event container_id:\"261f3c3cec6c94d1a2c39ab74d5b7e96e727e6e8fe2bfc350f052748f0a978d4\" id:\"261f3c3cec6c94d1a2c39ab74d5b7e96e727e6e8fe2bfc350f052748f0a978d4\" pid:3322 exited_at:{seconds:1757718594 nanos:677440961}" Sep 12 23:09:54.749276 containerd[1590]: time="2025-09-12T23:09:54.749234177Z" level=info msg="StartContainer for \"261f3c3cec6c94d1a2c39ab74d5b7e96e727e6e8fe2bfc350f052748f0a978d4\" returns successfully" Sep 12 23:09:55.015958 containerd[1590]: time="2025-09-12T23:09:55.015900750Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 23:09:55.016606 containerd[1590]: time="2025-09-12T23:09:55.016543630Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=18904197" Sep 12 23:09:55.017706 containerd[1590]: time="2025-09-12T23:09:55.017664408Z" level=info msg="ImageCreate event name:\"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 23:09:55.019060 containerd[1590]: time="2025-09-12T23:09:55.019001421Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\", repo tag 
\"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"18897442\" in 2.207438228s" Sep 12 23:09:55.019060 containerd[1590]: time="2025-09-12T23:09:55.019052322Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\"" Sep 12 23:09:55.021150 containerd[1590]: time="2025-09-12T23:09:55.021115011Z" level=info msg="CreateContainer within sandbox \"73890b387f12e6c0d8f890b8a0c17a32291df62bf255c013d54f4848b8223b37\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Sep 12 23:09:55.028381 containerd[1590]: time="2025-09-12T23:09:55.028334853Z" level=info msg="Container a398d39794c82d7a2408623a68fc61eb784d379e9e1012809196ce366852ae2a: CDI devices from CRI Config.CDIDevices: []" Sep 12 23:09:55.035719 containerd[1590]: time="2025-09-12T23:09:55.035676722Z" level=info msg="CreateContainer within sandbox \"73890b387f12e6c0d8f890b8a0c17a32291df62bf255c013d54f4848b8223b37\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"a398d39794c82d7a2408623a68fc61eb784d379e9e1012809196ce366852ae2a\"" Sep 12 23:09:55.036367 containerd[1590]: time="2025-09-12T23:09:55.036180733Z" level=info msg="StartContainer for \"a398d39794c82d7a2408623a68fc61eb784d379e9e1012809196ce366852ae2a\"" Sep 12 23:09:55.037140 containerd[1590]: time="2025-09-12T23:09:55.037103443Z" level=info msg="connecting to shim a398d39794c82d7a2408623a68fc61eb784d379e9e1012809196ce366852ae2a" address="unix:///run/containerd/s/ed19ab74225f37e64b05784425b4d7958d8eb1f2fe875d76f6229e891b2813e0" protocol=ttrpc version=3 Sep 12 23:09:55.064906 systemd[1]: Started cri-containerd-a398d39794c82d7a2408623a68fc61eb784d379e9e1012809196ce366852ae2a.scope - libcontainer container 
a398d39794c82d7a2408623a68fc61eb784d379e9e1012809196ce366852ae2a. Sep 12 23:09:55.100490 containerd[1590]: time="2025-09-12T23:09:55.100445069Z" level=info msg="StartContainer for \"a398d39794c82d7a2408623a68fc61eb784d379e9e1012809196ce366852ae2a\" returns successfully" Sep 12 23:09:55.578809 kubelet[2793]: E0912 23:09:55.578062 2793 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 23:09:55.582108 kubelet[2793]: E0912 23:09:55.582073 2793 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 23:09:55.583484 containerd[1590]: time="2025-09-12T23:09:55.583451325Z" level=info msg="CreateContainer within sandbox \"993e4f193e79b0c050eb6dfb3d5124b234e4ceb25c2e61982d0dc8570075b8ce\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Sep 12 23:09:55.595523 kubelet[2793]: I0912 23:09:55.595446 2793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-6c4d7847fc-4c6kw" podStartSLOduration=1.142235597 podStartE2EDuration="16.595424273s" podCreationTimestamp="2025-09-12 23:09:39 +0000 UTC" firstStartedPulling="2025-09-12 23:09:39.566543721 +0000 UTC m=+6.508882833" lastFinishedPulling="2025-09-12 23:09:55.019732397 +0000 UTC m=+21.962071509" observedRunningTime="2025-09-12 23:09:55.594998509 +0000 UTC m=+22.537337621" watchObservedRunningTime="2025-09-12 23:09:55.595424273 +0000 UTC m=+22.537763385" Sep 12 23:09:55.599808 containerd[1590]: time="2025-09-12T23:09:55.599073784Z" level=info msg="Container a0ae79b35e6eb9195135d385c8f9ea01cdcb295a1586522fb92c69e67605069a: CDI devices from CRI Config.CDIDevices: []" Sep 12 23:09:55.614115 containerd[1590]: time="2025-09-12T23:09:55.614065648Z" level=info msg="CreateContainer within sandbox 
\"993e4f193e79b0c050eb6dfb3d5124b234e4ceb25c2e61982d0dc8570075b8ce\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"a0ae79b35e6eb9195135d385c8f9ea01cdcb295a1586522fb92c69e67605069a\"" Sep 12 23:09:55.616786 containerd[1590]: time="2025-09-12T23:09:55.616626154Z" level=info msg="StartContainer for \"a0ae79b35e6eb9195135d385c8f9ea01cdcb295a1586522fb92c69e67605069a\"" Sep 12 23:09:55.618708 containerd[1590]: time="2025-09-12T23:09:55.618655085Z" level=info msg="connecting to shim a0ae79b35e6eb9195135d385c8f9ea01cdcb295a1586522fb92c69e67605069a" address="unix:///run/containerd/s/d216ce5f39af932b418587b061bfff33649736736c3ed32cb1c74b62e1df53c1" protocol=ttrpc version=3 Sep 12 23:09:55.663999 systemd[1]: Started cri-containerd-a0ae79b35e6eb9195135d385c8f9ea01cdcb295a1586522fb92c69e67605069a.scope - libcontainer container a0ae79b35e6eb9195135d385c8f9ea01cdcb295a1586522fb92c69e67605069a. Sep 12 23:09:55.699218 systemd[1]: cri-containerd-a0ae79b35e6eb9195135d385c8f9ea01cdcb295a1586522fb92c69e67605069a.scope: Deactivated successfully. 
Sep 12 23:09:55.700421 containerd[1590]: time="2025-09-12T23:09:55.700382923Z" level=info msg="TaskExit event in podsandbox handler container_id:\"a0ae79b35e6eb9195135d385c8f9ea01cdcb295a1586522fb92c69e67605069a\" id:\"a0ae79b35e6eb9195135d385c8f9ea01cdcb295a1586522fb92c69e67605069a\" pid:3407 exited_at:{seconds:1757718595 nanos:699389491}" Sep 12 23:09:55.702523 containerd[1590]: time="2025-09-12T23:09:55.702008104Z" level=info msg="received exit event container_id:\"a0ae79b35e6eb9195135d385c8f9ea01cdcb295a1586522fb92c69e67605069a\" id:\"a0ae79b35e6eb9195135d385c8f9ea01cdcb295a1586522fb92c69e67605069a\" pid:3407 exited_at:{seconds:1757718595 nanos:699389491}" Sep 12 23:09:55.704876 containerd[1590]: time="2025-09-12T23:09:55.704849063Z" level=info msg="StartContainer for \"a0ae79b35e6eb9195135d385c8f9ea01cdcb295a1586522fb92c69e67605069a\" returns successfully" Sep 12 23:09:56.587667 kubelet[2793]: E0912 23:09:56.587627 2793 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 23:09:56.588332 kubelet[2793]: E0912 23:09:56.587952 2793 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 23:09:56.592090 containerd[1590]: time="2025-09-12T23:09:56.592043674Z" level=info msg="CreateContainer within sandbox \"993e4f193e79b0c050eb6dfb3d5124b234e4ceb25c2e61982d0dc8570075b8ce\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Sep 12 23:09:56.609430 containerd[1590]: time="2025-09-12T23:09:56.609377107Z" level=info msg="Container 302e8eedc2c0209a2d344d5a6510cf0eb5b701ab03ce2f496178337f0b3d1fd1: CDI devices from CRI Config.CDIDevices: []" Sep 12 23:09:56.613883 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount175472064.mount: Deactivated successfully. 
Sep 12 23:09:56.618990 containerd[1590]: time="2025-09-12T23:09:56.618942938Z" level=info msg="CreateContainer within sandbox \"993e4f193e79b0c050eb6dfb3d5124b234e4ceb25c2e61982d0dc8570075b8ce\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"302e8eedc2c0209a2d344d5a6510cf0eb5b701ab03ce2f496178337f0b3d1fd1\"" Sep 12 23:09:56.619505 containerd[1590]: time="2025-09-12T23:09:56.619478602Z" level=info msg="StartContainer for \"302e8eedc2c0209a2d344d5a6510cf0eb5b701ab03ce2f496178337f0b3d1fd1\"" Sep 12 23:09:56.620332 containerd[1590]: time="2025-09-12T23:09:56.620307322Z" level=info msg="connecting to shim 302e8eedc2c0209a2d344d5a6510cf0eb5b701ab03ce2f496178337f0b3d1fd1" address="unix:///run/containerd/s/d216ce5f39af932b418587b061bfff33649736736c3ed32cb1c74b62e1df53c1" protocol=ttrpc version=3 Sep 12 23:09:56.640897 systemd[1]: Started cri-containerd-302e8eedc2c0209a2d344d5a6510cf0eb5b701ab03ce2f496178337f0b3d1fd1.scope - libcontainer container 302e8eedc2c0209a2d344d5a6510cf0eb5b701ab03ce2f496178337f0b3d1fd1. Sep 12 23:09:56.688838 containerd[1590]: time="2025-09-12T23:09:56.688795899Z" level=info msg="StartContainer for \"302e8eedc2c0209a2d344d5a6510cf0eb5b701ab03ce2f496178337f0b3d1fd1\" returns successfully" Sep 12 23:09:56.772066 containerd[1590]: time="2025-09-12T23:09:56.771922517Z" level=info msg="TaskExit event in podsandbox handler container_id:\"302e8eedc2c0209a2d344d5a6510cf0eb5b701ab03ce2f496178337f0b3d1fd1\" id:\"32b135a404c0e0498f5a2ababe7c09475c9c4d78a3edea948f9c2850aa319c6d\" pid:3473 exited_at:{seconds:1757718596 nanos:771093588}" Sep 12 23:09:56.784123 kubelet[2793]: I0912 23:09:56.784093 2793 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Sep 12 23:09:56.818708 systemd[1]: Created slice kubepods-burstable-pod5631c6c5_f4b6_4a9f_ae75_0fe99bba2b77.slice - libcontainer container kubepods-burstable-pod5631c6c5_f4b6_4a9f_ae75_0fe99bba2b77.slice. 
Sep 12 23:09:56.827096 systemd[1]: Created slice kubepods-burstable-poded4d4f27_cfbe_4a70_8eb2_3f84fe794720.slice - libcontainer container kubepods-burstable-poded4d4f27_cfbe_4a70_8eb2_3f84fe794720.slice. Sep 12 23:09:56.867226 kubelet[2793]: I0912 23:09:56.867064 2793 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/5631c6c5-f4b6-4a9f-ae75-0fe99bba2b77-config-volume\") pod \"coredns-668d6bf9bc-74cg8\" (UID: \"5631c6c5-f4b6-4a9f-ae75-0fe99bba2b77\") " pod="kube-system/coredns-668d6bf9bc-74cg8" Sep 12 23:09:56.867226 kubelet[2793]: I0912 23:09:56.867127 2793 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7z445\" (UniqueName: \"kubernetes.io/projected/ed4d4f27-cfbe-4a70-8eb2-3f84fe794720-kube-api-access-7z445\") pod \"coredns-668d6bf9bc-njc7q\" (UID: \"ed4d4f27-cfbe-4a70-8eb2-3f84fe794720\") " pod="kube-system/coredns-668d6bf9bc-njc7q" Sep 12 23:09:56.867226 kubelet[2793]: I0912 23:09:56.867156 2793 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ed4d4f27-cfbe-4a70-8eb2-3f84fe794720-config-volume\") pod \"coredns-668d6bf9bc-njc7q\" (UID: \"ed4d4f27-cfbe-4a70-8eb2-3f84fe794720\") " pod="kube-system/coredns-668d6bf9bc-njc7q" Sep 12 23:09:56.867226 kubelet[2793]: I0912 23:09:56.867171 2793 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tcrls\" (UniqueName: \"kubernetes.io/projected/5631c6c5-f4b6-4a9f-ae75-0fe99bba2b77-kube-api-access-tcrls\") pod \"coredns-668d6bf9bc-74cg8\" (UID: \"5631c6c5-f4b6-4a9f-ae75-0fe99bba2b77\") " pod="kube-system/coredns-668d6bf9bc-74cg8" Sep 12 23:09:57.122310 kubelet[2793]: E0912 23:09:57.122044 2793 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been 
omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 23:09:57.123553 containerd[1590]: time="2025-09-12T23:09:57.123500455Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-74cg8,Uid:5631c6c5-f4b6-4a9f-ae75-0fe99bba2b77,Namespace:kube-system,Attempt:0,}" Sep 12 23:09:57.130899 kubelet[2793]: E0912 23:09:57.130856 2793 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 23:09:57.131683 containerd[1590]: time="2025-09-12T23:09:57.131565409Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-njc7q,Uid:ed4d4f27-cfbe-4a70-8eb2-3f84fe794720,Namespace:kube-system,Attempt:0,}" Sep 12 23:09:57.596273 kubelet[2793]: E0912 23:09:57.596221 2793 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 23:09:58.598495 kubelet[2793]: E0912 23:09:58.598436 2793 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 23:09:58.994171 systemd-networkd[1480]: cilium_host: Link UP Sep 12 23:09:58.994382 systemd-networkd[1480]: cilium_net: Link UP Sep 12 23:09:58.994584 systemd-networkd[1480]: cilium_net: Gained carrier Sep 12 23:09:58.994793 systemd-networkd[1480]: cilium_host: Gained carrier Sep 12 23:09:59.125748 systemd-networkd[1480]: cilium_vxlan: Link UP Sep 12 23:09:59.125932 systemd-networkd[1480]: cilium_vxlan: Gained carrier Sep 12 23:09:59.296036 systemd-networkd[1480]: cilium_host: Gained IPv6LL Sep 12 23:09:59.373836 kernel: NET: Registered PF_ALG protocol family Sep 12 23:09:59.600944 kubelet[2793]: E0912 23:09:59.600904 2793 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been 
omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 23:09:59.720034 systemd-networkd[1480]: cilium_net: Gained IPv6LL Sep 12 23:10:00.177518 systemd-networkd[1480]: lxc_health: Link UP Sep 12 23:10:00.179710 systemd-networkd[1480]: lxc_health: Gained carrier Sep 12 23:10:00.687801 kernel: eth0: renamed from tmpd5a06 Sep 12 23:10:00.691271 systemd-networkd[1480]: lxc74b26068acf9: Link UP Sep 12 23:10:00.701793 kernel: eth0: renamed from tmpbf370 Sep 12 23:10:00.704904 systemd-networkd[1480]: lxc74b26068acf9: Gained carrier Sep 12 23:10:00.705697 systemd-networkd[1480]: lxc072997b50785: Link UP Sep 12 23:10:00.706866 systemd-networkd[1480]: lxc072997b50785: Gained carrier Sep 12 23:10:00.937170 systemd-networkd[1480]: cilium_vxlan: Gained IPv6LL Sep 12 23:10:01.407634 kubelet[2793]: E0912 23:10:01.407588 2793 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 23:10:01.436980 kubelet[2793]: I0912 23:10:01.436825 2793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-7pnrh" podStartSLOduration=10.1507205 podStartE2EDuration="23.43680688s" podCreationTimestamp="2025-09-12 23:09:38 +0000 UTC" firstStartedPulling="2025-09-12 23:09:39.525229704 +0000 UTC m=+6.467568816" lastFinishedPulling="2025-09-12 23:09:52.811316084 +0000 UTC m=+19.753655196" observedRunningTime="2025-09-12 23:09:57.612094914 +0000 UTC m=+24.554434036" watchObservedRunningTime="2025-09-12 23:10:01.43680688 +0000 UTC m=+28.379146022" Sep 12 23:10:01.605321 kubelet[2793]: E0912 23:10:01.605272 2793 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 23:10:01.773425 systemd[1]: Started sshd@9-10.0.0.144:22-10.0.0.1:34686.service - OpenSSH per-connection server daemon (10.0.0.1:34686). 
Sep 12 23:10:01.837519 sshd[3944]: Accepted publickey for core from 10.0.0.1 port 34686 ssh2: RSA SHA256:yYIxjrXQopGJXy2hREtBU3obW+AC5yBbC1aV8QR0JwE Sep 12 23:10:01.839884 sshd-session[3944]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 23:10:01.846824 systemd-logind[1573]: New session 10 of user core. Sep 12 23:10:01.854911 systemd[1]: Started session-10.scope - Session 10 of User core. Sep 12 23:10:02.003075 sshd[3949]: Connection closed by 10.0.0.1 port 34686 Sep 12 23:10:02.005460 sshd-session[3944]: pam_unix(sshd:session): session closed for user core Sep 12 23:10:02.010182 systemd[1]: sshd@9-10.0.0.144:22-10.0.0.1:34686.service: Deactivated successfully. Sep 12 23:10:02.012628 systemd[1]: session-10.scope: Deactivated successfully. Sep 12 23:10:02.013523 systemd-logind[1573]: Session 10 logged out. Waiting for processes to exit. Sep 12 23:10:02.015268 systemd-logind[1573]: Removed session 10. Sep 12 23:10:02.025985 systemd-networkd[1480]: lxc072997b50785: Gained IPv6LL Sep 12 23:10:02.087964 systemd-networkd[1480]: lxc_health: Gained IPv6LL Sep 12 23:10:02.607158 kubelet[2793]: E0912 23:10:02.607114 2793 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 23:10:02.664978 systemd-networkd[1480]: lxc74b26068acf9: Gained IPv6LL Sep 12 23:10:04.990546 containerd[1590]: time="2025-09-12T23:10:04.990506099Z" level=info msg="connecting to shim d5a060cbd9476d3f93ff0386cd56cbf827ab53389e978527c14d234182bd0f84" address="unix:///run/containerd/s/45827250e4fc8ca6b8e2ff58e97b14c7a203aca23f7eaad787a1527b7dc3f489" namespace=k8s.io protocol=ttrpc version=3 Sep 12 23:10:05.026056 systemd[1]: Started cri-containerd-d5a060cbd9476d3f93ff0386cd56cbf827ab53389e978527c14d234182bd0f84.scope - libcontainer container d5a060cbd9476d3f93ff0386cd56cbf827ab53389e978527c14d234182bd0f84. 
Sep 12 23:10:05.026462 containerd[1590]: time="2025-09-12T23:10:05.026406840Z" level=info msg="connecting to shim bf37045c4b58d6cc8af837b95f490cfc7ea0c18b2c4e942c8cfa920899031ac2" address="unix:///run/containerd/s/7e04f9d50c60fcac739ebe3d77907ca939b30f51153cf6172d84608a9f9a7500" namespace=k8s.io protocol=ttrpc version=3 Sep 12 23:10:05.050742 systemd-resolved[1403]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 12 23:10:05.064074 systemd[1]: Started cri-containerd-bf37045c4b58d6cc8af837b95f490cfc7ea0c18b2c4e942c8cfa920899031ac2.scope - libcontainer container bf37045c4b58d6cc8af837b95f490cfc7ea0c18b2c4e942c8cfa920899031ac2. Sep 12 23:10:05.082183 systemd-resolved[1403]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 12 23:10:05.100736 containerd[1590]: time="2025-09-12T23:10:05.100673805Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-74cg8,Uid:5631c6c5-f4b6-4a9f-ae75-0fe99bba2b77,Namespace:kube-system,Attempt:0,} returns sandbox id \"d5a060cbd9476d3f93ff0386cd56cbf827ab53389e978527c14d234182bd0f84\"" Sep 12 23:10:05.101797 kubelet[2793]: E0912 23:10:05.101620 2793 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 23:10:05.117122 containerd[1590]: time="2025-09-12T23:10:05.117017529Z" level=info msg="CreateContainer within sandbox \"d5a060cbd9476d3f93ff0386cd56cbf827ab53389e978527c14d234182bd0f84\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Sep 12 23:10:05.133894 containerd[1590]: time="2025-09-12T23:10:05.133842448Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-njc7q,Uid:ed4d4f27-cfbe-4a70-8eb2-3f84fe794720,Namespace:kube-system,Attempt:0,} returns sandbox id \"bf37045c4b58d6cc8af837b95f490cfc7ea0c18b2c4e942c8cfa920899031ac2\"" Sep 12 23:10:05.134875 
kubelet[2793]: E0912 23:10:05.134823 2793 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 23:10:05.142252 containerd[1590]: time="2025-09-12T23:10:05.142192439Z" level=info msg="Container 1b9ea7b8449c6ab7c17512e8ab4f37b1cf34606761555cb1a6240cc491b6434a: CDI devices from CRI Config.CDIDevices: []" Sep 12 23:10:05.151791 containerd[1590]: time="2025-09-12T23:10:05.151732342Z" level=info msg="CreateContainer within sandbox \"bf37045c4b58d6cc8af837b95f490cfc7ea0c18b2c4e942c8cfa920899031ac2\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Sep 12 23:10:05.154447 containerd[1590]: time="2025-09-12T23:10:05.154402641Z" level=info msg="CreateContainer within sandbox \"d5a060cbd9476d3f93ff0386cd56cbf827ab53389e978527c14d234182bd0f84\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"1b9ea7b8449c6ab7c17512e8ab4f37b1cf34606761555cb1a6240cc491b6434a\"" Sep 12 23:10:05.155377 containerd[1590]: time="2025-09-12T23:10:05.155301454Z" level=info msg="StartContainer for \"1b9ea7b8449c6ab7c17512e8ab4f37b1cf34606761555cb1a6240cc491b6434a\"" Sep 12 23:10:05.157350 containerd[1590]: time="2025-09-12T23:10:05.157298948Z" level=info msg="connecting to shim 1b9ea7b8449c6ab7c17512e8ab4f37b1cf34606761555cb1a6240cc491b6434a" address="unix:///run/containerd/s/45827250e4fc8ca6b8e2ff58e97b14c7a203aca23f7eaad787a1527b7dc3f489" protocol=ttrpc version=3 Sep 12 23:10:05.164989 containerd[1590]: time="2025-09-12T23:10:05.164956144Z" level=info msg="Container ddf795e6ba4942c8912ccd3a5ad469ff766f9f99662b985ef6b944ed2a1669a3: CDI devices from CRI Config.CDIDevices: []" Sep 12 23:10:05.174068 containerd[1590]: time="2025-09-12T23:10:05.174026044Z" level=info msg="CreateContainer within sandbox \"bf37045c4b58d6cc8af837b95f490cfc7ea0c18b2c4e942c8cfa920899031ac2\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id 
\"ddf795e6ba4942c8912ccd3a5ad469ff766f9f99662b985ef6b944ed2a1669a3\"" Sep 12 23:10:05.175543 containerd[1590]: time="2025-09-12T23:10:05.175315171Z" level=info msg="StartContainer for \"ddf795e6ba4942c8912ccd3a5ad469ff766f9f99662b985ef6b944ed2a1669a3\"" Sep 12 23:10:05.178663 containerd[1590]: time="2025-09-12T23:10:05.178626593Z" level=info msg="connecting to shim ddf795e6ba4942c8912ccd3a5ad469ff766f9f99662b985ef6b944ed2a1669a3" address="unix:///run/containerd/s/7e04f9d50c60fcac739ebe3d77907ca939b30f51153cf6172d84608a9f9a7500" protocol=ttrpc version=3 Sep 12 23:10:05.185951 systemd[1]: Started cri-containerd-1b9ea7b8449c6ab7c17512e8ab4f37b1cf34606761555cb1a6240cc491b6434a.scope - libcontainer container 1b9ea7b8449c6ab7c17512e8ab4f37b1cf34606761555cb1a6240cc491b6434a. Sep 12 23:10:05.206931 systemd[1]: Started cri-containerd-ddf795e6ba4942c8912ccd3a5ad469ff766f9f99662b985ef6b944ed2a1669a3.scope - libcontainer container ddf795e6ba4942c8912ccd3a5ad469ff766f9f99662b985ef6b944ed2a1669a3. Sep 12 23:10:05.241625 containerd[1590]: time="2025-09-12T23:10:05.241378034Z" level=info msg="StartContainer for \"1b9ea7b8449c6ab7c17512e8ab4f37b1cf34606761555cb1a6240cc491b6434a\" returns successfully" Sep 12 23:10:05.257515 containerd[1590]: time="2025-09-12T23:10:05.257470509Z" level=info msg="StartContainer for \"ddf795e6ba4942c8912ccd3a5ad469ff766f9f99662b985ef6b944ed2a1669a3\" returns successfully" Sep 12 23:10:05.640889 kubelet[2793]: E0912 23:10:05.638734 2793 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 23:10:05.652900 kubelet[2793]: E0912 23:10:05.650706 2793 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 23:10:05.765582 kubelet[2793]: I0912 23:10:05.765443 2793 pod_startup_latency_tracker.go:104] "Observed pod 
startup duration" pod="kube-system/coredns-668d6bf9bc-njc7q" podStartSLOduration=26.765284495 podStartE2EDuration="26.765284495s" podCreationTimestamp="2025-09-12 23:09:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-12 23:10:05.760851648 +0000 UTC m=+32.703190790" watchObservedRunningTime="2025-09-12 23:10:05.765284495 +0000 UTC m=+32.707623607" Sep 12 23:10:05.905177 kubelet[2793]: I0912 23:10:05.898525 2793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-74cg8" podStartSLOduration=26.898492799 podStartE2EDuration="26.898492799s" podCreationTimestamp="2025-09-12 23:09:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-12 23:10:05.888230994 +0000 UTC m=+32.830570106" watchObservedRunningTime="2025-09-12 23:10:05.898492799 +0000 UTC m=+32.840831911" Sep 12 23:10:05.988745 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount134279045.mount: Deactivated successfully. Sep 12 23:10:06.653007 kubelet[2793]: E0912 23:10:06.652968 2793 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 23:10:06.653484 kubelet[2793]: E0912 23:10:06.653138 2793 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 23:10:07.022065 systemd[1]: Started sshd@10-10.0.0.144:22-10.0.0.1:34688.service - OpenSSH per-connection server daemon (10.0.0.1:34688). 
Sep 12 23:10:07.076204 sshd[4141]: Accepted publickey for core from 10.0.0.1 port 34688 ssh2: RSA SHA256:yYIxjrXQopGJXy2hREtBU3obW+AC5yBbC1aV8QR0JwE Sep 12 23:10:07.077860 sshd-session[4141]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 23:10:07.084029 systemd-logind[1573]: New session 11 of user core. Sep 12 23:10:07.094044 systemd[1]: Started session-11.scope - Session 11 of User core. Sep 12 23:10:07.234274 sshd[4144]: Connection closed by 10.0.0.1 port 34688 Sep 12 23:10:07.234649 sshd-session[4141]: pam_unix(sshd:session): session closed for user core Sep 12 23:10:07.240857 systemd[1]: sshd@10-10.0.0.144:22-10.0.0.1:34688.service: Deactivated successfully. Sep 12 23:10:07.243619 systemd[1]: session-11.scope: Deactivated successfully. Sep 12 23:10:07.244625 systemd-logind[1573]: Session 11 logged out. Waiting for processes to exit. Sep 12 23:10:07.246592 systemd-logind[1573]: Removed session 11. Sep 12 23:10:07.654559 kubelet[2793]: E0912 23:10:07.654284 2793 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 23:10:07.654559 kubelet[2793]: E0912 23:10:07.654340 2793 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 23:10:12.247966 systemd[1]: Started sshd@11-10.0.0.144:22-10.0.0.1:48692.service - OpenSSH per-connection server daemon (10.0.0.1:48692). Sep 12 23:10:12.299961 sshd[4162]: Accepted publickey for core from 10.0.0.1 port 48692 ssh2: RSA SHA256:yYIxjrXQopGJXy2hREtBU3obW+AC5yBbC1aV8QR0JwE Sep 12 23:10:12.301651 sshd-session[4162]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 23:10:12.306249 systemd-logind[1573]: New session 12 of user core. 
Sep 12 23:10:12.316914 systemd[1]: Started session-12.scope - Session 12 of User core.
Sep 12 23:10:12.430596 sshd[4165]: Connection closed by 10.0.0.1 port 48692
Sep 12 23:10:12.430950 sshd-session[4162]: pam_unix(sshd:session): session closed for user core
Sep 12 23:10:12.435562 systemd[1]: sshd@11-10.0.0.144:22-10.0.0.1:48692.service: Deactivated successfully.
Sep 12 23:10:12.438299 systemd[1]: session-12.scope: Deactivated successfully.
Sep 12 23:10:12.439302 systemd-logind[1573]: Session 12 logged out. Waiting for processes to exit.
Sep 12 23:10:12.441076 systemd-logind[1573]: Removed session 12.
Sep 12 23:10:17.461607 systemd[1]: Started sshd@12-10.0.0.144:22-10.0.0.1:48702.service - OpenSSH per-connection server daemon (10.0.0.1:48702).
Sep 12 23:10:17.557914 sshd[4180]: Accepted publickey for core from 10.0.0.1 port 48702 ssh2: RSA SHA256:yYIxjrXQopGJXy2hREtBU3obW+AC5yBbC1aV8QR0JwE
Sep 12 23:10:17.560192 sshd-session[4180]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 12 23:10:17.584486 systemd-logind[1573]: New session 13 of user core.
Sep 12 23:10:17.596189 systemd[1]: Started session-13.scope - Session 13 of User core.
Sep 12 23:10:17.730859 sshd[4183]: Connection closed by 10.0.0.1 port 48702
Sep 12 23:10:17.731035 sshd-session[4180]: pam_unix(sshd:session): session closed for user core
Sep 12 23:10:17.736234 systemd[1]: sshd@12-10.0.0.144:22-10.0.0.1:48702.service: Deactivated successfully.
Sep 12 23:10:17.739112 systemd[1]: session-13.scope: Deactivated successfully.
Sep 12 23:10:17.741537 systemd-logind[1573]: Session 13 logged out. Waiting for processes to exit.
Sep 12 23:10:17.743343 systemd-logind[1573]: Removed session 13.
Sep 12 23:10:22.743876 systemd[1]: Started sshd@13-10.0.0.144:22-10.0.0.1:39750.service - OpenSSH per-connection server daemon (10.0.0.1:39750).
Sep 12 23:10:22.827095 sshd[4197]: Accepted publickey for core from 10.0.0.1 port 39750 ssh2: RSA SHA256:yYIxjrXQopGJXy2hREtBU3obW+AC5yBbC1aV8QR0JwE
Sep 12 23:10:22.829344 sshd-session[4197]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 12 23:10:22.835880 systemd-logind[1573]: New session 14 of user core.
Sep 12 23:10:22.845991 systemd[1]: Started session-14.scope - Session 14 of User core.
Sep 12 23:10:22.973355 sshd[4200]: Connection closed by 10.0.0.1 port 39750
Sep 12 23:10:22.973810 sshd-session[4197]: pam_unix(sshd:session): session closed for user core
Sep 12 23:10:22.987181 systemd[1]: sshd@13-10.0.0.144:22-10.0.0.1:39750.service: Deactivated successfully.
Sep 12 23:10:22.990110 systemd[1]: session-14.scope: Deactivated successfully.
Sep 12 23:10:22.991162 systemd-logind[1573]: Session 14 logged out. Waiting for processes to exit.
Sep 12 23:10:22.995733 systemd[1]: Started sshd@14-10.0.0.144:22-10.0.0.1:39766.service - OpenSSH per-connection server daemon (10.0.0.1:39766).
Sep 12 23:10:22.996578 systemd-logind[1573]: Removed session 14.
Sep 12 23:10:23.055368 sshd[4215]: Accepted publickey for core from 10.0.0.1 port 39766 ssh2: RSA SHA256:yYIxjrXQopGJXy2hREtBU3obW+AC5yBbC1aV8QR0JwE
Sep 12 23:10:23.057594 sshd-session[4215]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 12 23:10:23.063349 systemd-logind[1573]: New session 15 of user core.
Sep 12 23:10:23.081087 systemd[1]: Started session-15.scope - Session 15 of User core.
Sep 12 23:10:23.262173 sshd[4218]: Connection closed by 10.0.0.1 port 39766
Sep 12 23:10:23.264228 sshd-session[4215]: pam_unix(sshd:session): session closed for user core
Sep 12 23:10:23.274448 systemd[1]: sshd@14-10.0.0.144:22-10.0.0.1:39766.service: Deactivated successfully.
Sep 12 23:10:23.277394 systemd[1]: session-15.scope: Deactivated successfully.
Sep 12 23:10:23.280052 systemd-logind[1573]: Session 15 logged out. Waiting for processes to exit.
Sep 12 23:10:23.285376 systemd[1]: Started sshd@15-10.0.0.144:22-10.0.0.1:39770.service - OpenSSH per-connection server daemon (10.0.0.1:39770).
Sep 12 23:10:23.286959 systemd-logind[1573]: Removed session 15.
Sep 12 23:10:23.348919 sshd[4230]: Accepted publickey for core from 10.0.0.1 port 39770 ssh2: RSA SHA256:yYIxjrXQopGJXy2hREtBU3obW+AC5yBbC1aV8QR0JwE
Sep 12 23:10:23.351084 sshd-session[4230]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 12 23:10:23.358131 systemd-logind[1573]: New session 16 of user core.
Sep 12 23:10:23.367985 systemd[1]: Started session-16.scope - Session 16 of User core.
Sep 12 23:10:23.497561 sshd[4233]: Connection closed by 10.0.0.1 port 39770
Sep 12 23:10:23.498085 sshd-session[4230]: pam_unix(sshd:session): session closed for user core
Sep 12 23:10:23.502520 systemd[1]: sshd@15-10.0.0.144:22-10.0.0.1:39770.service: Deactivated successfully.
Sep 12 23:10:23.505125 systemd[1]: session-16.scope: Deactivated successfully.
Sep 12 23:10:23.506971 systemd-logind[1573]: Session 16 logged out. Waiting for processes to exit.
Sep 12 23:10:23.508617 systemd-logind[1573]: Removed session 16.
Sep 12 23:10:28.556070 systemd[1]: Started sshd@16-10.0.0.144:22-10.0.0.1:39774.service - OpenSSH per-connection server daemon (10.0.0.1:39774).
Sep 12 23:10:28.634045 sshd[4247]: Accepted publickey for core from 10.0.0.1 port 39774 ssh2: RSA SHA256:yYIxjrXQopGJXy2hREtBU3obW+AC5yBbC1aV8QR0JwE
Sep 12 23:10:28.636278 sshd-session[4247]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 12 23:10:28.644633 systemd-logind[1573]: New session 17 of user core.
Sep 12 23:10:28.660134 systemd[1]: Started session-17.scope - Session 17 of User core.
Sep 12 23:10:28.851940 sshd[4250]: Connection closed by 10.0.0.1 port 39774
Sep 12 23:10:28.853498 sshd-session[4247]: pam_unix(sshd:session): session closed for user core
Sep 12 23:10:28.861407 systemd[1]: sshd@16-10.0.0.144:22-10.0.0.1:39774.service: Deactivated successfully.
Sep 12 23:10:28.864587 systemd[1]: session-17.scope: Deactivated successfully.
Sep 12 23:10:28.868560 systemd-logind[1573]: Session 17 logged out. Waiting for processes to exit.
Sep 12 23:10:28.873116 systemd-logind[1573]: Removed session 17.
Sep 12 23:10:33.879184 systemd[1]: Started sshd@17-10.0.0.144:22-10.0.0.1:46298.service - OpenSSH per-connection server daemon (10.0.0.1:46298).
Sep 12 23:10:33.971576 sshd[4265]: Accepted publickey for core from 10.0.0.1 port 46298 ssh2: RSA SHA256:yYIxjrXQopGJXy2hREtBU3obW+AC5yBbC1aV8QR0JwE
Sep 12 23:10:33.972311 sshd-session[4265]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 12 23:10:33.979794 systemd-logind[1573]: New session 18 of user core.
Sep 12 23:10:33.991077 systemd[1]: Started session-18.scope - Session 18 of User core.
Sep 12 23:10:34.195608 sshd[4268]: Connection closed by 10.0.0.1 port 46298
Sep 12 23:10:34.196301 sshd-session[4265]: pam_unix(sshd:session): session closed for user core
Sep 12 23:10:34.201497 systemd[1]: sshd@17-10.0.0.144:22-10.0.0.1:46298.service: Deactivated successfully.
Sep 12 23:10:34.207161 systemd[1]: session-18.scope: Deactivated successfully.
Sep 12 23:10:34.213049 systemd-logind[1573]: Session 18 logged out. Waiting for processes to exit.
Sep 12 23:10:34.216701 systemd-logind[1573]: Removed session 18.
Sep 12 23:10:37.643528 kernel: hrtimer: interrupt took 18593124 ns
Sep 12 23:10:39.243343 systemd[1]: Started sshd@18-10.0.0.144:22-10.0.0.1:46300.service - OpenSSH per-connection server daemon (10.0.0.1:46300).
Sep 12 23:10:39.386876 sshd[4281]: Accepted publickey for core from 10.0.0.1 port 46300 ssh2: RSA SHA256:yYIxjrXQopGJXy2hREtBU3obW+AC5yBbC1aV8QR0JwE
Sep 12 23:10:39.393973 sshd-session[4281]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 12 23:10:39.403795 systemd-logind[1573]: New session 19 of user core.
Sep 12 23:10:39.421204 systemd[1]: Started session-19.scope - Session 19 of User core.
Sep 12 23:10:39.672649 sshd[4284]: Connection closed by 10.0.0.1 port 46300
Sep 12 23:10:39.672492 sshd-session[4281]: pam_unix(sshd:session): session closed for user core
Sep 12 23:10:39.683605 systemd[1]: sshd@18-10.0.0.144:22-10.0.0.1:46300.service: Deactivated successfully.
Sep 12 23:10:39.701227 systemd[1]: session-19.scope: Deactivated successfully.
Sep 12 23:10:39.705289 systemd-logind[1573]: Session 19 logged out. Waiting for processes to exit.
Sep 12 23:10:39.711782 systemd-logind[1573]: Removed session 19.
Sep 12 23:10:44.697097 systemd[1]: Started sshd@19-10.0.0.144:22-10.0.0.1:52270.service - OpenSSH per-connection server daemon (10.0.0.1:52270).
Sep 12 23:10:44.777229 sshd[4299]: Accepted publickey for core from 10.0.0.1 port 52270 ssh2: RSA SHA256:yYIxjrXQopGJXy2hREtBU3obW+AC5yBbC1aV8QR0JwE
Sep 12 23:10:44.779498 sshd-session[4299]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 12 23:10:44.795282 systemd-logind[1573]: New session 20 of user core.
Sep 12 23:10:44.803092 systemd[1]: Started session-20.scope - Session 20 of User core.
Sep 12 23:10:45.014860 sshd[4302]: Connection closed by 10.0.0.1 port 52270
Sep 12 23:10:45.015920 sshd-session[4299]: pam_unix(sshd:session): session closed for user core
Sep 12 23:10:45.032313 systemd[1]: sshd@19-10.0.0.144:22-10.0.0.1:52270.service: Deactivated successfully.
Sep 12 23:10:45.040910 systemd[1]: session-20.scope: Deactivated successfully.
Sep 12 23:10:45.046872 systemd-logind[1573]: Session 20 logged out. Waiting for processes to exit.
Sep 12 23:10:45.057472 systemd[1]: Started sshd@20-10.0.0.144:22-10.0.0.1:52286.service - OpenSSH per-connection server daemon (10.0.0.1:52286).
Sep 12 23:10:45.058828 systemd-logind[1573]: Removed session 20.
Sep 12 23:10:45.142628 sshd[4316]: Accepted publickey for core from 10.0.0.1 port 52286 ssh2: RSA SHA256:yYIxjrXQopGJXy2hREtBU3obW+AC5yBbC1aV8QR0JwE
Sep 12 23:10:45.144666 sshd-session[4316]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 12 23:10:45.152740 systemd-logind[1573]: New session 21 of user core.
Sep 12 23:10:45.162160 systemd[1]: Started session-21.scope - Session 21 of User core.
Sep 12 23:10:46.219987 sshd[4319]: Connection closed by 10.0.0.1 port 52286
Sep 12 23:10:46.230064 sshd-session[4316]: pam_unix(sshd:session): session closed for user core
Sep 12 23:10:46.244667 systemd[1]: sshd@20-10.0.0.144:22-10.0.0.1:52286.service: Deactivated successfully.
Sep 12 23:10:46.252211 systemd[1]: session-21.scope: Deactivated successfully.
Sep 12 23:10:46.260086 systemd-logind[1573]: Session 21 logged out. Waiting for processes to exit.
Sep 12 23:10:46.270301 systemd[1]: Started sshd@21-10.0.0.144:22-10.0.0.1:52292.service - OpenSSH per-connection server daemon (10.0.0.1:52292).
Sep 12 23:10:46.275799 systemd-logind[1573]: Removed session 21.
Sep 12 23:10:46.397048 sshd[4330]: Accepted publickey for core from 10.0.0.1 port 52292 ssh2: RSA SHA256:yYIxjrXQopGJXy2hREtBU3obW+AC5yBbC1aV8QR0JwE
Sep 12 23:10:46.398976 sshd-session[4330]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 12 23:10:46.417258 systemd-logind[1573]: New session 22 of user core.
Sep 12 23:10:46.430256 systemd[1]: Started session-22.scope - Session 22 of User core.
Sep 12 23:10:47.488341 sshd[4333]: Connection closed by 10.0.0.1 port 52292
Sep 12 23:10:47.489963 sshd-session[4330]: pam_unix(sshd:session): session closed for user core
Sep 12 23:10:47.516686 systemd[1]: sshd@21-10.0.0.144:22-10.0.0.1:52292.service: Deactivated successfully.
Sep 12 23:10:47.520914 systemd[1]: session-22.scope: Deactivated successfully.
Sep 12 23:10:47.537442 systemd-logind[1573]: Session 22 logged out. Waiting for processes to exit.
Sep 12 23:10:47.562440 systemd[1]: Started sshd@22-10.0.0.144:22-10.0.0.1:52308.service - OpenSSH per-connection server daemon (10.0.0.1:52308).
Sep 12 23:10:47.588867 systemd-logind[1573]: Removed session 22.
Sep 12 23:10:47.743076 sshd[4357]: Accepted publickey for core from 10.0.0.1 port 52308 ssh2: RSA SHA256:yYIxjrXQopGJXy2hREtBU3obW+AC5yBbC1aV8QR0JwE
Sep 12 23:10:47.746184 sshd-session[4357]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 12 23:10:47.755365 systemd-logind[1573]: New session 23 of user core.
Sep 12 23:10:47.771298 systemd[1]: Started session-23.scope - Session 23 of User core.
Sep 12 23:10:48.404437 sshd[4360]: Connection closed by 10.0.0.1 port 52308
Sep 12 23:10:48.401356 sshd-session[4357]: pam_unix(sshd:session): session closed for user core
Sep 12 23:10:48.426371 systemd[1]: sshd@22-10.0.0.144:22-10.0.0.1:52308.service: Deactivated successfully.
Sep 12 23:10:48.441778 systemd[1]: session-23.scope: Deactivated successfully.
Sep 12 23:10:48.448750 systemd-logind[1573]: Session 23 logged out. Waiting for processes to exit.
Sep 12 23:10:48.454652 systemd-logind[1573]: Removed session 23.
Sep 12 23:10:48.458858 systemd[1]: Started sshd@23-10.0.0.144:22-10.0.0.1:52310.service - OpenSSH per-connection server daemon (10.0.0.1:52310).
Sep 12 23:10:48.600204 sshd[4372]: Accepted publickey for core from 10.0.0.1 port 52310 ssh2: RSA SHA256:yYIxjrXQopGJXy2hREtBU3obW+AC5yBbC1aV8QR0JwE
Sep 12 23:10:48.601182 sshd-session[4372]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 12 23:10:48.617471 systemd-logind[1573]: New session 24 of user core.
Sep 12 23:10:48.628556 systemd[1]: Started session-24.scope - Session 24 of User core.
Sep 12 23:10:48.910980 sshd[4375]: Connection closed by 10.0.0.1 port 52310
Sep 12 23:10:48.910374 sshd-session[4372]: pam_unix(sshd:session): session closed for user core
Sep 12 23:10:48.916254 systemd[1]: sshd@23-10.0.0.144:22-10.0.0.1:52310.service: Deactivated successfully.
Sep 12 23:10:48.921243 systemd[1]: session-24.scope: Deactivated successfully.
Sep 12 23:10:48.927415 systemd-logind[1573]: Session 24 logged out. Waiting for processes to exit.
Sep 12 23:10:48.935215 systemd-logind[1573]: Removed session 24.
Sep 12 23:10:53.938050 systemd[1]: Started sshd@24-10.0.0.144:22-10.0.0.1:58804.service - OpenSSH per-connection server daemon (10.0.0.1:58804).
Sep 12 23:10:54.045745 sshd[4389]: Accepted publickey for core from 10.0.0.1 port 58804 ssh2: RSA SHA256:yYIxjrXQopGJXy2hREtBU3obW+AC5yBbC1aV8QR0JwE
Sep 12 23:10:54.048170 sshd-session[4389]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 12 23:10:54.063801 systemd-logind[1573]: New session 25 of user core.
Sep 12 23:10:54.076178 systemd[1]: Started session-25.scope - Session 25 of User core.
Sep 12 23:10:54.282801 sshd[4392]: Connection closed by 10.0.0.1 port 58804
Sep 12 23:10:54.285431 sshd-session[4389]: pam_unix(sshd:session): session closed for user core
Sep 12 23:10:54.300585 systemd[1]: sshd@24-10.0.0.144:22-10.0.0.1:58804.service: Deactivated successfully.
Sep 12 23:10:54.305948 systemd[1]: session-25.scope: Deactivated successfully.
Sep 12 23:10:54.311232 systemd-logind[1573]: Session 25 logged out. Waiting for processes to exit.
Sep 12 23:10:54.320346 systemd-logind[1573]: Removed session 25.
Sep 12 23:10:56.212863 kubelet[2793]: E0912 23:10:56.212029 2793 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 12 23:10:59.207495 kubelet[2793]: E0912 23:10:59.204440 2793 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 12 23:10:59.324741 systemd[1]: Started sshd@25-10.0.0.144:22-10.0.0.1:58818.service - OpenSSH per-connection server daemon (10.0.0.1:58818).
Sep 12 23:10:59.474831 sshd[4406]: Accepted publickey for core from 10.0.0.1 port 58818 ssh2: RSA SHA256:yYIxjrXQopGJXy2hREtBU3obW+AC5yBbC1aV8QR0JwE
Sep 12 23:10:59.476938 sshd-session[4406]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 12 23:10:59.496854 systemd-logind[1573]: New session 26 of user core.
Sep 12 23:10:59.516436 systemd[1]: Started session-26.scope - Session 26 of User core.
Sep 12 23:10:59.806315 sshd[4409]: Connection closed by 10.0.0.1 port 58818
Sep 12 23:10:59.807241 sshd-session[4406]: pam_unix(sshd:session): session closed for user core
Sep 12 23:10:59.814493 systemd[1]: sshd@25-10.0.0.144:22-10.0.0.1:58818.service: Deactivated successfully.
Sep 12 23:10:59.817720 systemd[1]: session-26.scope: Deactivated successfully.
Sep 12 23:10:59.821502 systemd-logind[1573]: Session 26 logged out. Waiting for processes to exit.
Sep 12 23:10:59.823484 systemd-logind[1573]: Removed session 26.
Sep 12 23:11:00.204434 kubelet[2793]: E0912 23:11:00.204352 2793 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 12 23:11:01.211411 kubelet[2793]: E0912 23:11:01.205282 2793 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 12 23:11:04.852796 systemd[1]: Started sshd@26-10.0.0.144:22-10.0.0.1:35172.service - OpenSSH per-connection server daemon (10.0.0.1:35172).
Sep 12 23:11:04.958672 sshd[4424]: Accepted publickey for core from 10.0.0.1 port 35172 ssh2: RSA SHA256:yYIxjrXQopGJXy2hREtBU3obW+AC5yBbC1aV8QR0JwE
Sep 12 23:11:04.960834 sshd-session[4424]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 12 23:11:04.981606 systemd-logind[1573]: New session 27 of user core.
Sep 12 23:11:04.991084 systemd[1]: Started session-27.scope - Session 27 of User core.
Sep 12 23:11:05.232089 sshd[4427]: Connection closed by 10.0.0.1 port 35172
Sep 12 23:11:05.231567 sshd-session[4424]: pam_unix(sshd:session): session closed for user core
Sep 12 23:11:05.247005 systemd[1]: sshd@26-10.0.0.144:22-10.0.0.1:35172.service: Deactivated successfully.
Sep 12 23:11:05.252786 systemd[1]: session-27.scope: Deactivated successfully.
Sep 12 23:11:05.257867 systemd-logind[1573]: Session 27 logged out. Waiting for processes to exit.
Sep 12 23:11:05.260739 systemd-logind[1573]: Removed session 27.
Sep 12 23:11:06.204098 kubelet[2793]: E0912 23:11:06.204029 2793 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 12 23:11:10.248064 systemd[1]: Started sshd@27-10.0.0.144:22-10.0.0.1:32932.service - OpenSSH per-connection server daemon (10.0.0.1:32932).
Sep 12 23:11:10.350523 sshd[4440]: Accepted publickey for core from 10.0.0.1 port 32932 ssh2: RSA SHA256:yYIxjrXQopGJXy2hREtBU3obW+AC5yBbC1aV8QR0JwE
Sep 12 23:11:10.351269 sshd-session[4440]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 12 23:11:10.366153 systemd-logind[1573]: New session 28 of user core.
Sep 12 23:11:10.386872 systemd[1]: Started session-28.scope - Session 28 of User core.
Sep 12 23:11:10.584942 sshd[4443]: Connection closed by 10.0.0.1 port 32932
Sep 12 23:11:10.586626 sshd-session[4440]: pam_unix(sshd:session): session closed for user core
Sep 12 23:11:10.594832 systemd[1]: sshd@27-10.0.0.144:22-10.0.0.1:32932.service: Deactivated successfully.
Sep 12 23:11:10.598328 systemd[1]: session-28.scope: Deactivated successfully.
Sep 12 23:11:10.600248 systemd-logind[1573]: Session 28 logged out. Waiting for processes to exit.
Sep 12 23:11:10.605735 systemd-logind[1573]: Removed session 28.
Sep 12 23:11:15.627339 systemd[1]: Started sshd@28-10.0.0.144:22-10.0.0.1:32946.service - OpenSSH per-connection server daemon (10.0.0.1:32946).
Sep 12 23:11:15.720401 sshd[4459]: Accepted publickey for core from 10.0.0.1 port 32946 ssh2: RSA SHA256:yYIxjrXQopGJXy2hREtBU3obW+AC5yBbC1aV8QR0JwE
Sep 12 23:11:15.722638 sshd-session[4459]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 12 23:11:15.732584 systemd-logind[1573]: New session 29 of user core.
Sep 12 23:11:15.743182 systemd[1]: Started session-29.scope - Session 29 of User core.
Sep 12 23:11:15.943436 sshd[4462]: Connection closed by 10.0.0.1 port 32946
Sep 12 23:11:15.943627 sshd-session[4459]: pam_unix(sshd:session): session closed for user core
Sep 12 23:11:15.958732 systemd[1]: sshd@28-10.0.0.144:22-10.0.0.1:32946.service: Deactivated successfully.
Sep 12 23:11:15.962924 systemd[1]: session-29.scope: Deactivated successfully.
Sep 12 23:11:15.965976 systemd-logind[1573]: Session 29 logged out. Waiting for processes to exit.
Sep 12 23:11:15.970154 systemd[1]: Started sshd@29-10.0.0.144:22-10.0.0.1:32958.service - OpenSSH per-connection server daemon (10.0.0.1:32958).
Sep 12 23:11:15.972865 systemd-logind[1573]: Removed session 29.
Sep 12 23:11:16.061910 sshd[4475]: Accepted publickey for core from 10.0.0.1 port 32958 ssh2: RSA SHA256:yYIxjrXQopGJXy2hREtBU3obW+AC5yBbC1aV8QR0JwE
Sep 12 23:11:16.063338 sshd-session[4475]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 12 23:11:16.094591 systemd-logind[1573]: New session 30 of user core.
Sep 12 23:11:16.105123 systemd[1]: Started session-30.scope - Session 30 of User core.
Sep 12 23:11:17.639705 containerd[1590]: time="2025-09-12T23:11:17.639448174Z" level=info msg="StopContainer for \"a398d39794c82d7a2408623a68fc61eb784d379e9e1012809196ce366852ae2a\" with timeout 30 (s)"
Sep 12 23:11:17.654040 containerd[1590]: time="2025-09-12T23:11:17.653932196Z" level=info msg="Stop container \"a398d39794c82d7a2408623a68fc61eb784d379e9e1012809196ce366852ae2a\" with signal terminated"
Sep 12 23:11:17.694616 systemd[1]: cri-containerd-a398d39794c82d7a2408623a68fc61eb784d379e9e1012809196ce366852ae2a.scope: Deactivated successfully.
Sep 12 23:11:17.696717 containerd[1590]: time="2025-09-12T23:11:17.696279920Z" level=info msg="received exit event container_id:\"a398d39794c82d7a2408623a68fc61eb784d379e9e1012809196ce366852ae2a\" id:\"a398d39794c82d7a2408623a68fc61eb784d379e9e1012809196ce366852ae2a\" pid:3371 exited_at:{seconds:1757718677 nanos:695871279}"
Sep 12 23:11:17.699054 containerd[1590]: time="2025-09-12T23:11:17.699000005Z" level=info msg="TaskExit event in podsandbox handler container_id:\"a398d39794c82d7a2408623a68fc61eb784d379e9e1012809196ce366852ae2a\" id:\"a398d39794c82d7a2408623a68fc61eb784d379e9e1012809196ce366852ae2a\" pid:3371 exited_at:{seconds:1757718677 nanos:695871279}"
Sep 12 23:11:17.732268 containerd[1590]: time="2025-09-12T23:11:17.730568864Z" level=info msg="TaskExit event in podsandbox handler container_id:\"302e8eedc2c0209a2d344d5a6510cf0eb5b701ab03ce2f496178337f0b3d1fd1\" id:\"1436aa21f97a9ce894c2137624895134ef850eafac95992be4b4a871f600e938\" pid:4499 exited_at:{seconds:1757718677 nanos:729211742}"
Sep 12 23:11:17.737041 containerd[1590]: time="2025-09-12T23:11:17.734035602Z" level=info msg="StopContainer for \"302e8eedc2c0209a2d344d5a6510cf0eb5b701ab03ce2f496178337f0b3d1fd1\" with timeout 2 (s)"
Sep 12 23:11:17.737041 containerd[1590]: time="2025-09-12T23:11:17.734416076Z" level=info msg="Stop container \"302e8eedc2c0209a2d344d5a6510cf0eb5b701ab03ce2f496178337f0b3d1fd1\" with signal terminated"
Sep 12 23:11:17.747482 containerd[1590]: time="2025-09-12T23:11:17.747404560Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Sep 12 23:11:17.762581 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a398d39794c82d7a2408623a68fc61eb784d379e9e1012809196ce366852ae2a-rootfs.mount: Deactivated successfully.
Sep 12 23:11:17.762964 systemd-networkd[1480]: lxc_health: Link DOWN
Sep 12 23:11:17.762978 systemd-networkd[1480]: lxc_health: Lost carrier
Sep 12 23:11:17.803922 systemd[1]: cri-containerd-302e8eedc2c0209a2d344d5a6510cf0eb5b701ab03ce2f496178337f0b3d1fd1.scope: Deactivated successfully.
Sep 12 23:11:17.804492 systemd[1]: cri-containerd-302e8eedc2c0209a2d344d5a6510cf0eb5b701ab03ce2f496178337f0b3d1fd1.scope: Consumed 8.025s CPU time, 125.5M memory peak, 232K read from disk, 13.3M written to disk.
Sep 12 23:11:17.814355 containerd[1590]: time="2025-09-12T23:11:17.814253668Z" level=info msg="TaskExit event in podsandbox handler container_id:\"302e8eedc2c0209a2d344d5a6510cf0eb5b701ab03ce2f496178337f0b3d1fd1\" id:\"302e8eedc2c0209a2d344d5a6510cf0eb5b701ab03ce2f496178337f0b3d1fd1\" pid:3444 exited_at:{seconds:1757718677 nanos:813904784}"
Sep 12 23:11:17.869392 containerd[1590]: time="2025-09-12T23:11:17.814483342Z" level=info msg="received exit event container_id:\"302e8eedc2c0209a2d344d5a6510cf0eb5b701ab03ce2f496178337f0b3d1fd1\" id:\"302e8eedc2c0209a2d344d5a6510cf0eb5b701ab03ce2f496178337f0b3d1fd1\" pid:3444 exited_at:{seconds:1757718677 nanos:813904784}"
Sep 12 23:11:17.881688 containerd[1590]: time="2025-09-12T23:11:17.880135156Z" level=info msg="StopContainer for \"a398d39794c82d7a2408623a68fc61eb784d379e9e1012809196ce366852ae2a\" returns successfully"
Sep 12 23:11:17.884569 containerd[1590]: time="2025-09-12T23:11:17.884192846Z" level=info msg="StopPodSandbox for \"73890b387f12e6c0d8f890b8a0c17a32291df62bf255c013d54f4848b8223b37\""
Sep 12 23:11:17.884569 containerd[1590]: time="2025-09-12T23:11:17.884320162Z" level=info msg="Container to stop \"a398d39794c82d7a2408623a68fc61eb784d379e9e1012809196ce366852ae2a\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Sep 12 23:11:17.914403 systemd[1]: cri-containerd-73890b387f12e6c0d8f890b8a0c17a32291df62bf255c013d54f4848b8223b37.scope: Deactivated successfully.
Sep 12 23:11:17.925844 containerd[1590]: time="2025-09-12T23:11:17.920508877Z" level=info msg="TaskExit event in podsandbox handler container_id:\"73890b387f12e6c0d8f890b8a0c17a32291df62bf255c013d54f4848b8223b37\" id:\"73890b387f12e6c0d8f890b8a0c17a32291df62bf255c013d54f4848b8223b37\" pid:2959 exit_status:137 exited_at:{seconds:1757718677 nanos:919327203}"
Sep 12 23:11:17.940715 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-302e8eedc2c0209a2d344d5a6510cf0eb5b701ab03ce2f496178337f0b3d1fd1-rootfs.mount: Deactivated successfully.
Sep 12 23:11:18.019025 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-73890b387f12e6c0d8f890b8a0c17a32291df62bf255c013d54f4848b8223b37-rootfs.mount: Deactivated successfully.
Sep 12 23:11:18.084691 containerd[1590]: time="2025-09-12T23:11:18.083023575Z" level=info msg="StopContainer for \"302e8eedc2c0209a2d344d5a6510cf0eb5b701ab03ce2f496178337f0b3d1fd1\" returns successfully"
Sep 12 23:11:18.084691 containerd[1590]: time="2025-09-12T23:11:18.083749830Z" level=info msg="StopPodSandbox for \"993e4f193e79b0c050eb6dfb3d5124b234e4ceb25c2e61982d0dc8570075b8ce\""
Sep 12 23:11:18.084691 containerd[1590]: time="2025-09-12T23:11:18.083848880Z" level=info msg="Container to stop \"a0ae79b35e6eb9195135d385c8f9ea01cdcb295a1586522fb92c69e67605069a\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Sep 12 23:11:18.084691 containerd[1590]: time="2025-09-12T23:11:18.083871254Z" level=info msg="Container to stop \"302e8eedc2c0209a2d344d5a6510cf0eb5b701ab03ce2f496178337f0b3d1fd1\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Sep 12 23:11:18.084691 containerd[1590]: time="2025-09-12T23:11:18.083889500Z" level=info msg="Container to stop \"c1196405cf67b90d5e32159454cb92e80418b28cd11447d076778e4dd192a3b5\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Sep 12 23:11:18.084691 containerd[1590]: time="2025-09-12T23:11:18.083900681Z" level=info msg="Container to stop \"f292a7dbe2971ac8711df24906c6c2d54706c5e3660db9c351c358d9b0862fc9\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Sep 12 23:11:18.084691 containerd[1590]: time="2025-09-12T23:11:18.083910510Z" level=info msg="Container to stop \"261f3c3cec6c94d1a2c39ab74d5b7e96e727e6e8fe2bfc350f052748f0a978d4\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Sep 12 23:11:18.108471 containerd[1590]: time="2025-09-12T23:11:18.106432111Z" level=info msg="shim disconnected" id=73890b387f12e6c0d8f890b8a0c17a32291df62bf255c013d54f4848b8223b37 namespace=k8s.io
Sep 12 23:11:18.108471 containerd[1590]: time="2025-09-12T23:11:18.106467289Z" level=warning msg="cleaning up after shim disconnected" id=73890b387f12e6c0d8f890b8a0c17a32291df62bf255c013d54f4848b8223b37 namespace=k8s.io
Sep 12 23:11:18.108471 containerd[1590]: time="2025-09-12T23:11:18.106476387Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Sep 12 23:11:18.110988 systemd[1]: cri-containerd-993e4f193e79b0c050eb6dfb3d5124b234e4ceb25c2e61982d0dc8570075b8ce.scope: Deactivated successfully.
Sep 12 23:11:18.197062 containerd[1590]: time="2025-09-12T23:11:18.192066805Z" level=info msg="TaskExit event in podsandbox handler container_id:\"993e4f193e79b0c050eb6dfb3d5124b234e4ceb25c2e61982d0dc8570075b8ce\" id:\"993e4f193e79b0c050eb6dfb3d5124b234e4ceb25c2e61982d0dc8570075b8ce\" pid:2940 exit_status:137 exited_at:{seconds:1757718678 nanos:115200556}"
Sep 12 23:11:18.206226 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-73890b387f12e6c0d8f890b8a0c17a32291df62bf255c013d54f4848b8223b37-shm.mount: Deactivated successfully.
Sep 12 23:11:18.206378 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-993e4f193e79b0c050eb6dfb3d5124b234e4ceb25c2e61982d0dc8570075b8ce-rootfs.mount: Deactivated successfully.
Sep 12 23:11:18.220185 containerd[1590]: time="2025-09-12T23:11:18.218374443Z" level=info msg="TearDown network for sandbox \"73890b387f12e6c0d8f890b8a0c17a32291df62bf255c013d54f4848b8223b37\" successfully"
Sep 12 23:11:18.220185 containerd[1590]: time="2025-09-12T23:11:18.219640559Z" level=info msg="StopPodSandbox for \"73890b387f12e6c0d8f890b8a0c17a32291df62bf255c013d54f4848b8223b37\" returns successfully"
Sep 12 23:11:18.223844 containerd[1590]: time="2025-09-12T23:11:18.220002569Z" level=info msg="received exit event sandbox_id:\"73890b387f12e6c0d8f890b8a0c17a32291df62bf255c013d54f4848b8223b37\" exit_status:137 exited_at:{seconds:1757718677 nanos:919327203}"
Sep 12 23:11:18.225106 containerd[1590]: time="2025-09-12T23:11:18.224807304Z" level=info msg="shim disconnected" id=993e4f193e79b0c050eb6dfb3d5124b234e4ceb25c2e61982d0dc8570075b8ce namespace=k8s.io
Sep 12 23:11:18.225106 containerd[1590]: time="2025-09-12T23:11:18.224833745Z" level=warning msg="cleaning up after shim disconnected" id=993e4f193e79b0c050eb6dfb3d5124b234e4ceb25c2e61982d0dc8570075b8ce namespace=k8s.io
Sep 12 23:11:18.225106 containerd[1590]: time="2025-09-12T23:11:18.224843584Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Sep 12 23:11:18.225106 containerd[1590]: time="2025-09-12T23:11:18.225076624Z" level=info msg="TearDown network for sandbox \"993e4f193e79b0c050eb6dfb3d5124b234e4ceb25c2e61982d0dc8570075b8ce\" successfully"
Sep 12 23:11:18.225106 containerd[1590]: time="2025-09-12T23:11:18.225098316Z" level=info msg="StopPodSandbox for \"993e4f193e79b0c050eb6dfb3d5124b234e4ceb25c2e61982d0dc8570075b8ce\" returns successfully"
Sep 12 23:11:18.225319 containerd[1590]: time="2025-09-12T23:11:18.224850006Z" level=info msg="received exit event sandbox_id:\"993e4f193e79b0c050eb6dfb3d5124b234e4ceb25c2e61982d0dc8570075b8ce\" exit_status:137 exited_at:{seconds:1757718678 nanos:115200556}"
Sep 12 23:11:18.235984 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-993e4f193e79b0c050eb6dfb3d5124b234e4ceb25c2e61982d0dc8570075b8ce-shm.mount: Deactivated successfully.
Sep 12 23:11:18.292833 kubelet[2793]: E0912 23:11:18.291187 2793 kubelet.go:3002] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Sep 12 23:11:18.366050 kubelet[2793]: I0912 23:11:18.365953 2793 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/1b7c7a1e-d4be-45bf-ba2c-81df617bdf5c-etc-cni-netd\") pod \"1b7c7a1e-d4be-45bf-ba2c-81df617bdf5c\" (UID: \"1b7c7a1e-d4be-45bf-ba2c-81df617bdf5c\") "
Sep 12 23:11:18.366050 kubelet[2793]: I0912 23:11:18.366020 2793 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/1b7c7a1e-d4be-45bf-ba2c-81df617bdf5c-lib-modules\") pod \"1b7c7a1e-d4be-45bf-ba2c-81df617bdf5c\" (UID: \"1b7c7a1e-d4be-45bf-ba2c-81df617bdf5c\") "
Sep 12 23:11:18.366050 kubelet[2793]: I0912 23:11:18.366039 2793 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/1b7c7a1e-d4be-45bf-ba2c-81df617bdf5c-cilium-run\") pod \"1b7c7a1e-d4be-45bf-ba2c-81df617bdf5c\" (UID: \"1b7c7a1e-d4be-45bf-ba2c-81df617bdf5c\") "
Sep 12 23:11:18.366050 kubelet[2793]: I0912 23:11:18.366061 2793 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/1b7c7a1e-d4be-45bf-ba2c-81df617bdf5c-hostproc\") pod \"1b7c7a1e-d4be-45bf-ba2c-81df617bdf5c\" (UID: \"1b7c7a1e-d4be-45bf-ba2c-81df617bdf5c\") "
Sep 12 23:11:18.366379 kubelet[2793]: I0912 23:11:18.366092 2793 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/1b7c7a1e-d4be-45bf-ba2c-81df617bdf5c-clustermesh-secrets\") pod \"1b7c7a1e-d4be-45bf-ba2c-81df617bdf5c\" (UID: \"1b7c7a1e-d4be-45bf-ba2c-81df617bdf5c\") "
Sep 12 23:11:18.366379 kubelet[2793]: I0912 23:11:18.366113 2793 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/1b7c7a1e-d4be-45bf-ba2c-81df617bdf5c-cni-path\") pod \"1b7c7a1e-d4be-45bf-ba2c-81df617bdf5c\" (UID: \"1b7c7a1e-d4be-45bf-ba2c-81df617bdf5c\") "
Sep 12 23:11:18.366379 kubelet[2793]: I0912 23:11:18.366140 2793 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/1b7c7a1e-d4be-45bf-ba2c-81df617bdf5c-cilium-config-path\") pod \"1b7c7a1e-d4be-45bf-ba2c-81df617bdf5c\" (UID: \"1b7c7a1e-d4be-45bf-ba2c-81df617bdf5c\") "
Sep 12 23:11:18.366379 kubelet[2793]: I0912 23:11:18.366161 2793 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/1b7c7a1e-d4be-45bf-ba2c-81df617bdf5c-xtables-lock\") pod \"1b7c7a1e-d4be-45bf-ba2c-81df617bdf5c\" (UID: \"1b7c7a1e-d4be-45bf-ba2c-81df617bdf5c\") "
Sep 12 23:11:18.366379 kubelet[2793]: I0912 23:11:18.366182 2793 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/1b7c7a1e-d4be-45bf-ba2c-81df617bdf5c-host-proc-sys-kernel\") pod \"1b7c7a1e-d4be-45bf-ba2c-81df617bdf5c\" (UID: \"1b7c7a1e-d4be-45bf-ba2c-81df617bdf5c\") "
Sep 12 23:11:18.366379 kubelet[2793]: I0912 23:11:18.366204 2793 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/1b7c7a1e-d4be-45bf-ba2c-81df617bdf5c-hubble-tls\") pod \"1b7c7a1e-d4be-45bf-ba2c-81df617bdf5c\" (UID: \"1b7c7a1e-d4be-45bf-ba2c-81df617bdf5c\") "
Sep 12 23:11:18.366594 kubelet[2793]: I0912 23:11:18.366228 2793 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-74cwt\" (UniqueName: \"kubernetes.io/projected/1b7c7a1e-d4be-45bf-ba2c-81df617bdf5c-kube-api-access-74cwt\") pod \"1b7c7a1e-d4be-45bf-ba2c-81df617bdf5c\" (UID: \"1b7c7a1e-d4be-45bf-ba2c-81df617bdf5c\") "
Sep 12 23:11:18.366594 kubelet[2793]: I0912 23:11:18.366247 2793 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/1b7c7a1e-d4be-45bf-ba2c-81df617bdf5c-cilium-cgroup\") pod \"1b7c7a1e-d4be-45bf-ba2c-81df617bdf5c\" (UID: \"1b7c7a1e-d4be-45bf-ba2c-81df617bdf5c\") "
Sep 12 23:11:18.366594 kubelet[2793]: I0912 23:11:18.366272 2793 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/ebf74b46-da1c-4e44-b6b6-77623d9c780f-cilium-config-path\") pod \"ebf74b46-da1c-4e44-b6b6-77623d9c780f\" (UID: \"ebf74b46-da1c-4e44-b6b6-77623d9c780f\") "
Sep 12 23:11:18.366594 kubelet[2793]: I0912 23:11:18.366295 2793 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/1b7c7a1e-d4be-45bf-ba2c-81df617bdf5c-bpf-maps\") pod \"1b7c7a1e-d4be-45bf-ba2c-81df617bdf5c\" (UID: \"1b7c7a1e-d4be-45bf-ba2c-81df617bdf5c\") "
Sep 12 23:11:18.366594 kubelet[2793]: I0912 23:11:18.366326 2793 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/1b7c7a1e-d4be-45bf-ba2c-81df617bdf5c-host-proc-sys-net\") pod \"1b7c7a1e-d4be-45bf-ba2c-81df617bdf5c\" (UID: \"1b7c7a1e-d4be-45bf-ba2c-81df617bdf5c\") "
Sep 12 23:11:18.366594 kubelet[2793]: I0912 23:11:18.366356 2793 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nk5sv\" (UniqueName: \"kubernetes.io/projected/ebf74b46-da1c-4e44-b6b6-77623d9c780f-kube-api-access-nk5sv\") pod
\"ebf74b46-da1c-4e44-b6b6-77623d9c780f\" (UID: \"ebf74b46-da1c-4e44-b6b6-77623d9c780f\") " Sep 12 23:11:18.369659 kubelet[2793]: I0912 23:11:18.367026 2793 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1b7c7a1e-d4be-45bf-ba2c-81df617bdf5c-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "1b7c7a1e-d4be-45bf-ba2c-81df617bdf5c" (UID: "1b7c7a1e-d4be-45bf-ba2c-81df617bdf5c"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 12 23:11:18.369659 kubelet[2793]: I0912 23:11:18.367086 2793 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1b7c7a1e-d4be-45bf-ba2c-81df617bdf5c-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "1b7c7a1e-d4be-45bf-ba2c-81df617bdf5c" (UID: "1b7c7a1e-d4be-45bf-ba2c-81df617bdf5c"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 12 23:11:18.369659 kubelet[2793]: I0912 23:11:18.367105 2793 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1b7c7a1e-d4be-45bf-ba2c-81df617bdf5c-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "1b7c7a1e-d4be-45bf-ba2c-81df617bdf5c" (UID: "1b7c7a1e-d4be-45bf-ba2c-81df617bdf5c"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 12 23:11:18.369659 kubelet[2793]: I0912 23:11:18.367121 2793 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1b7c7a1e-d4be-45bf-ba2c-81df617bdf5c-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "1b7c7a1e-d4be-45bf-ba2c-81df617bdf5c" (UID: "1b7c7a1e-d4be-45bf-ba2c-81df617bdf5c"). InnerVolumeSpecName "cilium-run". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 12 23:11:18.369659 kubelet[2793]: I0912 23:11:18.367139 2793 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1b7c7a1e-d4be-45bf-ba2c-81df617bdf5c-hostproc" (OuterVolumeSpecName: "hostproc") pod "1b7c7a1e-d4be-45bf-ba2c-81df617bdf5c" (UID: "1b7c7a1e-d4be-45bf-ba2c-81df617bdf5c"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 12 23:11:18.369888 kubelet[2793]: I0912 23:11:18.367998 2793 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1b7c7a1e-d4be-45bf-ba2c-81df617bdf5c-cni-path" (OuterVolumeSpecName: "cni-path") pod "1b7c7a1e-d4be-45bf-ba2c-81df617bdf5c" (UID: "1b7c7a1e-d4be-45bf-ba2c-81df617bdf5c"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 12 23:11:18.370826 kubelet[2793]: I0912 23:11:18.370269 2793 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1b7c7a1e-d4be-45bf-ba2c-81df617bdf5c-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "1b7c7a1e-d4be-45bf-ba2c-81df617bdf5c" (UID: "1b7c7a1e-d4be-45bf-ba2c-81df617bdf5c"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 12 23:11:18.370826 kubelet[2793]: I0912 23:11:18.370313 2793 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1b7c7a1e-d4be-45bf-ba2c-81df617bdf5c-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "1b7c7a1e-d4be-45bf-ba2c-81df617bdf5c" (UID: "1b7c7a1e-d4be-45bf-ba2c-81df617bdf5c"). InnerVolumeSpecName "host-proc-sys-kernel". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 12 23:11:18.378966 kubelet[2793]: I0912 23:11:18.378914 2793 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1b7c7a1e-d4be-45bf-ba2c-81df617bdf5c-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "1b7c7a1e-d4be-45bf-ba2c-81df617bdf5c" (UID: "1b7c7a1e-d4be-45bf-ba2c-81df617bdf5c"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 12 23:11:18.384684 kubelet[2793]: I0912 23:11:18.381261 2793 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1b7c7a1e-d4be-45bf-ba2c-81df617bdf5c-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "1b7c7a1e-d4be-45bf-ba2c-81df617bdf5c" (UID: "1b7c7a1e-d4be-45bf-ba2c-81df617bdf5c"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 12 23:11:18.387012 kubelet[2793]: I0912 23:11:18.386564 2793 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1b7c7a1e-d4be-45bf-ba2c-81df617bdf5c-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "1b7c7a1e-d4be-45bf-ba2c-81df617bdf5c" (UID: "1b7c7a1e-d4be-45bf-ba2c-81df617bdf5c"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Sep 12 23:11:18.387405 kubelet[2793]: I0912 23:11:18.387318 2793 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1b7c7a1e-d4be-45bf-ba2c-81df617bdf5c-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "1b7c7a1e-d4be-45bf-ba2c-81df617bdf5c" (UID: "1b7c7a1e-d4be-45bf-ba2c-81df617bdf5c"). InnerVolumeSpecName "hubble-tls". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Sep 12 23:11:18.390022 kubelet[2793]: I0912 23:11:18.389959 2793 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ebf74b46-da1c-4e44-b6b6-77623d9c780f-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "ebf74b46-da1c-4e44-b6b6-77623d9c780f" (UID: "ebf74b46-da1c-4e44-b6b6-77623d9c780f"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Sep 12 23:11:18.391082 kubelet[2793]: I0912 23:11:18.391000 2793 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1b7c7a1e-d4be-45bf-ba2c-81df617bdf5c-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "1b7c7a1e-d4be-45bf-ba2c-81df617bdf5c" (UID: "1b7c7a1e-d4be-45bf-ba2c-81df617bdf5c"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue "" Sep 12 23:11:18.391290 kubelet[2793]: I0912 23:11:18.391236 2793 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ebf74b46-da1c-4e44-b6b6-77623d9c780f-kube-api-access-nk5sv" (OuterVolumeSpecName: "kube-api-access-nk5sv") pod "ebf74b46-da1c-4e44-b6b6-77623d9c780f" (UID: "ebf74b46-da1c-4e44-b6b6-77623d9c780f"). InnerVolumeSpecName "kube-api-access-nk5sv". PluginName "kubernetes.io/projected", VolumeGIDValue "" Sep 12 23:11:18.392424 kubelet[2793]: I0912 23:11:18.392354 2793 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1b7c7a1e-d4be-45bf-ba2c-81df617bdf5c-kube-api-access-74cwt" (OuterVolumeSpecName: "kube-api-access-74cwt") pod "1b7c7a1e-d4be-45bf-ba2c-81df617bdf5c" (UID: "1b7c7a1e-d4be-45bf-ba2c-81df617bdf5c"). InnerVolumeSpecName "kube-api-access-74cwt". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Sep 12 23:11:18.469351 kubelet[2793]: I0912 23:11:18.467358 2793 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/ebf74b46-da1c-4e44-b6b6-77623d9c780f-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Sep 12 23:11:18.469351 kubelet[2793]: I0912 23:11:18.469152 2793 reconciler_common.go:299] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/1b7c7a1e-d4be-45bf-ba2c-81df617bdf5c-bpf-maps\") on node \"localhost\" DevicePath \"\"" Sep 12 23:11:18.471215 kubelet[2793]: I0912 23:11:18.469673 2793 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/1b7c7a1e-d4be-45bf-ba2c-81df617bdf5c-host-proc-sys-net\") on node \"localhost\" DevicePath \"\"" Sep 12 23:11:18.471215 kubelet[2793]: I0912 23:11:18.469690 2793 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-nk5sv\" (UniqueName: \"kubernetes.io/projected/ebf74b46-da1c-4e44-b6b6-77623d9c780f-kube-api-access-nk5sv\") on node \"localhost\" DevicePath \"\"" Sep 12 23:11:18.471215 kubelet[2793]: I0912 23:11:18.469702 2793 reconciler_common.go:299] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/1b7c7a1e-d4be-45bf-ba2c-81df617bdf5c-etc-cni-netd\") on node \"localhost\" DevicePath \"\"" Sep 12 23:11:18.471215 kubelet[2793]: I0912 23:11:18.469712 2793 reconciler_common.go:299] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/1b7c7a1e-d4be-45bf-ba2c-81df617bdf5c-lib-modules\") on node \"localhost\" DevicePath \"\"" Sep 12 23:11:18.471215 kubelet[2793]: I0912 23:11:18.469722 2793 reconciler_common.go:299] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/1b7c7a1e-d4be-45bf-ba2c-81df617bdf5c-cilium-run\") on node \"localhost\" DevicePath \"\"" Sep 12 23:11:18.471215 kubelet[2793]: I0912 
23:11:18.469732 2793 reconciler_common.go:299] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/1b7c7a1e-d4be-45bf-ba2c-81df617bdf5c-hostproc\") on node \"localhost\" DevicePath \"\"" Sep 12 23:11:18.471215 kubelet[2793]: I0912 23:11:18.469742 2793 reconciler_common.go:299] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/1b7c7a1e-d4be-45bf-ba2c-81df617bdf5c-clustermesh-secrets\") on node \"localhost\" DevicePath \"\"" Sep 12 23:11:18.471215 kubelet[2793]: I0912 23:11:18.469751 2793 reconciler_common.go:299] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/1b7c7a1e-d4be-45bf-ba2c-81df617bdf5c-cni-path\") on node \"localhost\" DevicePath \"\"" Sep 12 23:11:18.471481 kubelet[2793]: I0912 23:11:18.469779 2793 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/1b7c7a1e-d4be-45bf-ba2c-81df617bdf5c-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Sep 12 23:11:18.471481 kubelet[2793]: I0912 23:11:18.469790 2793 reconciler_common.go:299] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/1b7c7a1e-d4be-45bf-ba2c-81df617bdf5c-xtables-lock\") on node \"localhost\" DevicePath \"\"" Sep 12 23:11:18.471481 kubelet[2793]: I0912 23:11:18.469799 2793 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/1b7c7a1e-d4be-45bf-ba2c-81df617bdf5c-host-proc-sys-kernel\") on node \"localhost\" DevicePath \"\"" Sep 12 23:11:18.471481 kubelet[2793]: I0912 23:11:18.469809 2793 reconciler_common.go:299] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/1b7c7a1e-d4be-45bf-ba2c-81df617bdf5c-hubble-tls\") on node \"localhost\" DevicePath \"\"" Sep 12 23:11:18.471481 kubelet[2793]: I0912 23:11:18.469818 2793 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-74cwt\" 
(UniqueName: \"kubernetes.io/projected/1b7c7a1e-d4be-45bf-ba2c-81df617bdf5c-kube-api-access-74cwt\") on node \"localhost\" DevicePath \"\"" Sep 12 23:11:18.471481 kubelet[2793]: I0912 23:11:18.469827 2793 reconciler_common.go:299] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/1b7c7a1e-d4be-45bf-ba2c-81df617bdf5c-cilium-cgroup\") on node \"localhost\" DevicePath \"\"" Sep 12 23:11:18.759574 systemd[1]: var-lib-kubelet-pods-ebf74b46\x2dda1c\x2d4e44\x2db6b6\x2d77623d9c780f-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dnk5sv.mount: Deactivated successfully. Sep 12 23:11:18.759733 systemd[1]: var-lib-kubelet-pods-1b7c7a1e\x2dd4be\x2d45bf\x2dba2c\x2d81df617bdf5c-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d74cwt.mount: Deactivated successfully. Sep 12 23:11:18.759837 systemd[1]: var-lib-kubelet-pods-1b7c7a1e\x2dd4be\x2d45bf\x2dba2c\x2d81df617bdf5c-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Sep 12 23:11:18.759927 systemd[1]: var-lib-kubelet-pods-1b7c7a1e\x2dd4be\x2d45bf\x2dba2c\x2d81df617bdf5c-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Sep 12 23:11:19.122724 kubelet[2793]: I0912 23:11:19.122679 2793 scope.go:117] "RemoveContainer" containerID="a398d39794c82d7a2408623a68fc61eb784d379e9e1012809196ce366852ae2a" Sep 12 23:11:19.134890 containerd[1590]: time="2025-09-12T23:11:19.130420082Z" level=info msg="RemoveContainer for \"a398d39794c82d7a2408623a68fc61eb784d379e9e1012809196ce366852ae2a\"" Sep 12 23:11:19.168430 systemd[1]: Removed slice kubepods-besteffort-podebf74b46_da1c_4e44_b6b6_77623d9c780f.slice - libcontainer container kubepods-besteffort-podebf74b46_da1c_4e44_b6b6_77623d9c780f.slice. Sep 12 23:11:19.171995 systemd[1]: Removed slice kubepods-burstable-pod1b7c7a1e_d4be_45bf_ba2c_81df617bdf5c.slice - libcontainer container kubepods-burstable-pod1b7c7a1e_d4be_45bf_ba2c_81df617bdf5c.slice. 
Sep 12 23:11:19.172134 systemd[1]: kubepods-burstable-pod1b7c7a1e_d4be_45bf_ba2c_81df617bdf5c.slice: Consumed 8.139s CPU time, 125.8M memory peak, 244K read from disk, 16.6M written to disk. Sep 12 23:11:19.178574 containerd[1590]: time="2025-09-12T23:11:19.178502863Z" level=info msg="RemoveContainer for \"a398d39794c82d7a2408623a68fc61eb784d379e9e1012809196ce366852ae2a\" returns successfully" Sep 12 23:11:19.178923 kubelet[2793]: I0912 23:11:19.178888 2793 scope.go:117] "RemoveContainer" containerID="302e8eedc2c0209a2d344d5a6510cf0eb5b701ab03ce2f496178337f0b3d1fd1" Sep 12 23:11:19.181913 containerd[1590]: time="2025-09-12T23:11:19.180605146Z" level=info msg="RemoveContainer for \"302e8eedc2c0209a2d344d5a6510cf0eb5b701ab03ce2f496178337f0b3d1fd1\"" Sep 12 23:11:19.199661 containerd[1590]: time="2025-09-12T23:11:19.197160997Z" level=info msg="RemoveContainer for \"302e8eedc2c0209a2d344d5a6510cf0eb5b701ab03ce2f496178337f0b3d1fd1\" returns successfully" Sep 12 23:11:19.199661 containerd[1590]: time="2025-09-12T23:11:19.198829532Z" level=info msg="RemoveContainer for \"a0ae79b35e6eb9195135d385c8f9ea01cdcb295a1586522fb92c69e67605069a\"" Sep 12 23:11:19.199854 kubelet[2793]: I0912 23:11:19.197448 2793 scope.go:117] "RemoveContainer" containerID="a0ae79b35e6eb9195135d385c8f9ea01cdcb295a1586522fb92c69e67605069a" Sep 12 23:11:19.225269 containerd[1590]: time="2025-09-12T23:11:19.224011767Z" level=info msg="RemoveContainer for \"a0ae79b35e6eb9195135d385c8f9ea01cdcb295a1586522fb92c69e67605069a\" returns successfully" Sep 12 23:11:19.225754 kubelet[2793]: I0912 23:11:19.224531 2793 scope.go:117] "RemoveContainer" containerID="261f3c3cec6c94d1a2c39ab74d5b7e96e727e6e8fe2bfc350f052748f0a978d4" Sep 12 23:11:19.233535 containerd[1590]: time="2025-09-12T23:11:19.233391418Z" level=info msg="RemoveContainer for \"261f3c3cec6c94d1a2c39ab74d5b7e96e727e6e8fe2bfc350f052748f0a978d4\"" Sep 12 23:11:19.251948 containerd[1590]: time="2025-09-12T23:11:19.251860978Z" level=info 
msg="RemoveContainer for \"261f3c3cec6c94d1a2c39ab74d5b7e96e727e6e8fe2bfc350f052748f0a978d4\" returns successfully" Sep 12 23:11:19.252367 kubelet[2793]: I0912 23:11:19.252323 2793 scope.go:117] "RemoveContainer" containerID="f292a7dbe2971ac8711df24906c6c2d54706c5e3660db9c351c358d9b0862fc9" Sep 12 23:11:19.255489 containerd[1590]: time="2025-09-12T23:11:19.255409246Z" level=info msg="RemoveContainer for \"f292a7dbe2971ac8711df24906c6c2d54706c5e3660db9c351c358d9b0862fc9\"" Sep 12 23:11:19.264543 containerd[1590]: time="2025-09-12T23:11:19.264484078Z" level=info msg="RemoveContainer for \"f292a7dbe2971ac8711df24906c6c2d54706c5e3660db9c351c358d9b0862fc9\" returns successfully" Sep 12 23:11:19.265907 kubelet[2793]: I0912 23:11:19.265865 2793 scope.go:117] "RemoveContainer" containerID="c1196405cf67b90d5e32159454cb92e80418b28cd11447d076778e4dd192a3b5" Sep 12 23:11:19.268590 containerd[1590]: time="2025-09-12T23:11:19.268145003Z" level=info msg="RemoveContainer for \"c1196405cf67b90d5e32159454cb92e80418b28cd11447d076778e4dd192a3b5\"" Sep 12 23:11:19.280671 containerd[1590]: time="2025-09-12T23:11:19.280608014Z" level=info msg="RemoveContainer for \"c1196405cf67b90d5e32159454cb92e80418b28cd11447d076778e4dd192a3b5\" returns successfully" Sep 12 23:11:19.523235 sshd[4478]: Connection closed by 10.0.0.1 port 32958 Sep 12 23:11:19.524331 sshd-session[4475]: pam_unix(sshd:session): session closed for user core Sep 12 23:11:19.543284 systemd[1]: sshd@29-10.0.0.144:22-10.0.0.1:32958.service: Deactivated successfully. Sep 12 23:11:19.550117 systemd[1]: session-30.scope: Deactivated successfully. Sep 12 23:11:19.561400 systemd-logind[1573]: Session 30 logged out. Waiting for processes to exit. Sep 12 23:11:19.572276 systemd[1]: Started sshd@30-10.0.0.144:22-10.0.0.1:32974.service - OpenSSH per-connection server daemon (10.0.0.1:32974). Sep 12 23:11:19.579197 systemd-logind[1573]: Removed session 30. 
Sep 12 23:11:19.682048 sshd[4634]: Accepted publickey for core from 10.0.0.1 port 32974 ssh2: RSA SHA256:yYIxjrXQopGJXy2hREtBU3obW+AC5yBbC1aV8QR0JwE Sep 12 23:11:19.686184 sshd-session[4634]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 23:11:19.706230 systemd-logind[1573]: New session 31 of user core. Sep 12 23:11:19.723444 systemd[1]: Started session-31.scope - Session 31 of User core. Sep 12 23:11:20.204202 kubelet[2793]: E0912 23:11:20.203241 2793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-668d6bf9bc-njc7q" podUID="ed4d4f27-cfbe-4a70-8eb2-3f84fe794720" Sep 12 23:11:20.716791 sshd[4637]: Connection closed by 10.0.0.1 port 32974 Sep 12 23:11:20.715198 sshd-session[4634]: pam_unix(sshd:session): session closed for user core Sep 12 23:11:20.741135 systemd[1]: sshd@30-10.0.0.144:22-10.0.0.1:32974.service: Deactivated successfully. Sep 12 23:11:20.758962 systemd[1]: session-31.scope: Deactivated successfully. Sep 12 23:11:20.765150 systemd-logind[1573]: Session 31 logged out. Waiting for processes to exit. Sep 12 23:11:20.767668 systemd[1]: Started sshd@31-10.0.0.144:22-10.0.0.1:57720.service - OpenSSH per-connection server daemon (10.0.0.1:57720). Sep 12 23:11:20.776276 systemd-logind[1573]: Removed session 31. 
Sep 12 23:11:20.778124 kubelet[2793]: I0912 23:11:20.776457 2793 memory_manager.go:355] "RemoveStaleState removing state" podUID="1b7c7a1e-d4be-45bf-ba2c-81df617bdf5c" containerName="cilium-agent" Sep 12 23:11:20.778124 kubelet[2793]: I0912 23:11:20.776493 2793 memory_manager.go:355] "RemoveStaleState removing state" podUID="ebf74b46-da1c-4e44-b6b6-77623d9c780f" containerName="cilium-operator" Sep 12 23:11:20.803830 systemd[1]: Created slice kubepods-burstable-podd2689306_0f9f_4072_a583_b051354265e7.slice - libcontainer container kubepods-burstable-podd2689306_0f9f_4072_a583_b051354265e7.slice. Sep 12 23:11:20.883853 sshd[4649]: Accepted publickey for core from 10.0.0.1 port 57720 ssh2: RSA SHA256:yYIxjrXQopGJXy2hREtBU3obW+AC5yBbC1aV8QR0JwE Sep 12 23:11:20.889586 sshd-session[4649]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 23:11:20.905964 kubelet[2793]: I0912 23:11:20.905850 2793 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/d2689306-0f9f-4072-a583-b051354265e7-cilium-config-path\") pod \"cilium-vspp7\" (UID: \"d2689306-0f9f-4072-a583-b051354265e7\") " pod="kube-system/cilium-vspp7" Sep 12 23:11:20.905964 kubelet[2793]: I0912 23:11:20.905916 2793 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/d2689306-0f9f-4072-a583-b051354265e7-clustermesh-secrets\") pod \"cilium-vspp7\" (UID: \"d2689306-0f9f-4072-a583-b051354265e7\") " pod="kube-system/cilium-vspp7" Sep 12 23:11:20.905964 kubelet[2793]: I0912 23:11:20.905948 2793 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/d2689306-0f9f-4072-a583-b051354265e7-hubble-tls\") pod \"cilium-vspp7\" (UID: \"d2689306-0f9f-4072-a583-b051354265e7\") " 
pod="kube-system/cilium-vspp7" Sep 12 23:11:20.906187 kubelet[2793]: I0912 23:11:20.905978 2793 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/d2689306-0f9f-4072-a583-b051354265e7-lib-modules\") pod \"cilium-vspp7\" (UID: \"d2689306-0f9f-4072-a583-b051354265e7\") " pod="kube-system/cilium-vspp7" Sep 12 23:11:20.906187 kubelet[2793]: I0912 23:11:20.906001 2793 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/d2689306-0f9f-4072-a583-b051354265e7-xtables-lock\") pod \"cilium-vspp7\" (UID: \"d2689306-0f9f-4072-a583-b051354265e7\") " pod="kube-system/cilium-vspp7" Sep 12 23:11:20.906187 kubelet[2793]: I0912 23:11:20.906021 2793 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/d2689306-0f9f-4072-a583-b051354265e7-cilium-ipsec-secrets\") pod \"cilium-vspp7\" (UID: \"d2689306-0f9f-4072-a583-b051354265e7\") " pod="kube-system/cilium-vspp7" Sep 12 23:11:20.906187 kubelet[2793]: I0912 23:11:20.906045 2793 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xs8tz\" (UniqueName: \"kubernetes.io/projected/d2689306-0f9f-4072-a583-b051354265e7-kube-api-access-xs8tz\") pod \"cilium-vspp7\" (UID: \"d2689306-0f9f-4072-a583-b051354265e7\") " pod="kube-system/cilium-vspp7" Sep 12 23:11:20.906187 kubelet[2793]: I0912 23:11:20.906071 2793 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/d2689306-0f9f-4072-a583-b051354265e7-cilium-run\") pod \"cilium-vspp7\" (UID: \"d2689306-0f9f-4072-a583-b051354265e7\") " pod="kube-system/cilium-vspp7" Sep 12 23:11:20.906187 kubelet[2793]: I0912 23:11:20.906097 2793 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/d2689306-0f9f-4072-a583-b051354265e7-etc-cni-netd\") pod \"cilium-vspp7\" (UID: \"d2689306-0f9f-4072-a583-b051354265e7\") " pod="kube-system/cilium-vspp7" Sep 12 23:11:20.906387 kubelet[2793]: I0912 23:11:20.906122 2793 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/d2689306-0f9f-4072-a583-b051354265e7-bpf-maps\") pod \"cilium-vspp7\" (UID: \"d2689306-0f9f-4072-a583-b051354265e7\") " pod="kube-system/cilium-vspp7" Sep 12 23:11:20.906387 kubelet[2793]: I0912 23:11:20.906145 2793 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/d2689306-0f9f-4072-a583-b051354265e7-cilium-cgroup\") pod \"cilium-vspp7\" (UID: \"d2689306-0f9f-4072-a583-b051354265e7\") " pod="kube-system/cilium-vspp7" Sep 12 23:11:20.906387 kubelet[2793]: I0912 23:11:20.906169 2793 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/d2689306-0f9f-4072-a583-b051354265e7-cni-path\") pod \"cilium-vspp7\" (UID: \"d2689306-0f9f-4072-a583-b051354265e7\") " pod="kube-system/cilium-vspp7" Sep 12 23:11:20.906387 kubelet[2793]: I0912 23:11:20.906207 2793 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/d2689306-0f9f-4072-a583-b051354265e7-hostproc\") pod \"cilium-vspp7\" (UID: \"d2689306-0f9f-4072-a583-b051354265e7\") " pod="kube-system/cilium-vspp7" Sep 12 23:11:20.906387 kubelet[2793]: I0912 23:11:20.906231 2793 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: 
\"kubernetes.io/host-path/d2689306-0f9f-4072-a583-b051354265e7-host-proc-sys-net\") pod \"cilium-vspp7\" (UID: \"d2689306-0f9f-4072-a583-b051354265e7\") " pod="kube-system/cilium-vspp7" Sep 12 23:11:20.906387 kubelet[2793]: I0912 23:11:20.906255 2793 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/d2689306-0f9f-4072-a583-b051354265e7-host-proc-sys-kernel\") pod \"cilium-vspp7\" (UID: \"d2689306-0f9f-4072-a583-b051354265e7\") " pod="kube-system/cilium-vspp7" Sep 12 23:11:20.912047 systemd-logind[1573]: New session 32 of user core. Sep 12 23:11:20.927990 systemd[1]: Started session-32.scope - Session 32 of User core. Sep 12 23:11:21.009163 sshd[4652]: Connection closed by 10.0.0.1 port 57720 Sep 12 23:11:21.007053 sshd-session[4649]: pam_unix(sshd:session): session closed for user core Sep 12 23:11:21.073243 systemd[1]: sshd@31-10.0.0.144:22-10.0.0.1:57720.service: Deactivated successfully. Sep 12 23:11:21.076226 systemd[1]: session-32.scope: Deactivated successfully. Sep 12 23:11:21.081104 systemd-logind[1573]: Session 32 logged out. Waiting for processes to exit. Sep 12 23:11:21.086373 systemd[1]: Started sshd@32-10.0.0.144:22-10.0.0.1:57724.service - OpenSSH per-connection server daemon (10.0.0.1:57724). Sep 12 23:11:21.087435 systemd-logind[1573]: Removed session 32. 
Sep 12 23:11:21.132409 kubelet[2793]: E0912 23:11:21.130138 2793 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 12 23:11:21.132562 containerd[1590]: time="2025-09-12T23:11:21.130971714Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-vspp7,Uid:d2689306-0f9f-4072-a583-b051354265e7,Namespace:kube-system,Attempt:0,}"
Sep 12 23:11:21.184538 containerd[1590]: time="2025-09-12T23:11:21.184455854Z" level=info msg="connecting to shim bb00b06580a021fc539667a3693ba305f3330af2634a0e980d745747f826307a" address="unix:///run/containerd/s/cc16ccffece635d683cdbcf98911f4113323f54f31aecd4d61f141ba6ac61e04" namespace=k8s.io protocol=ttrpc version=3
Sep 12 23:11:21.190084 sshd[4663]: Accepted publickey for core from 10.0.0.1 port 57724 ssh2: RSA SHA256:yYIxjrXQopGJXy2hREtBU3obW+AC5yBbC1aV8QR0JwE
Sep 12 23:11:21.192546 sshd-session[4663]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 12 23:11:21.202398 systemd-logind[1573]: New session 33 of user core.
Sep 12 23:11:21.217120 systemd[1]: Started session-33.scope - Session 33 of User core.
Sep 12 23:11:21.217707 kubelet[2793]: I0912 23:11:21.217662 2793 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1b7c7a1e-d4be-45bf-ba2c-81df617bdf5c" path="/var/lib/kubelet/pods/1b7c7a1e-d4be-45bf-ba2c-81df617bdf5c/volumes"
Sep 12 23:11:21.224173 kubelet[2793]: I0912 23:11:21.222410 2793 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ebf74b46-da1c-4e44-b6b6-77623d9c780f" path="/var/lib/kubelet/pods/ebf74b46-da1c-4e44-b6b6-77623d9c780f/volumes"
Sep 12 23:11:21.281704 systemd[1]: Started cri-containerd-bb00b06580a021fc539667a3693ba305f3330af2634a0e980d745747f826307a.scope - libcontainer container bb00b06580a021fc539667a3693ba305f3330af2634a0e980d745747f826307a.
Sep 12 23:11:21.368424 containerd[1590]: time="2025-09-12T23:11:21.368362489Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-vspp7,Uid:d2689306-0f9f-4072-a583-b051354265e7,Namespace:kube-system,Attempt:0,} returns sandbox id \"bb00b06580a021fc539667a3693ba305f3330af2634a0e980d745747f826307a\""
Sep 12 23:11:21.371106 kubelet[2793]: E0912 23:11:21.370502 2793 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 12 23:11:21.380118 containerd[1590]: time="2025-09-12T23:11:21.378511666Z" level=info msg="CreateContainer within sandbox \"bb00b06580a021fc539667a3693ba305f3330af2634a0e980d745747f826307a\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Sep 12 23:11:21.413086 containerd[1590]: time="2025-09-12T23:11:21.413019329Z" level=info msg="Container 6c2a50195e1a9253a57dd807eb1be7104b2fd38a3073ee1e3f5e33f8f29f1784: CDI devices from CRI Config.CDIDevices: []"
Sep 12 23:11:21.467935 containerd[1590]: time="2025-09-12T23:11:21.466158551Z" level=info msg="CreateContainer within sandbox \"bb00b06580a021fc539667a3693ba305f3330af2634a0e980d745747f826307a\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"6c2a50195e1a9253a57dd807eb1be7104b2fd38a3073ee1e3f5e33f8f29f1784\""
Sep 12 23:11:21.467935 containerd[1590]: time="2025-09-12T23:11:21.467023804Z" level=info msg="StartContainer for \"6c2a50195e1a9253a57dd807eb1be7104b2fd38a3073ee1e3f5e33f8f29f1784\""
Sep 12 23:11:21.481239 containerd[1590]: time="2025-09-12T23:11:21.479170795Z" level=info msg="connecting to shim 6c2a50195e1a9253a57dd807eb1be7104b2fd38a3073ee1e3f5e33f8f29f1784" address="unix:///run/containerd/s/cc16ccffece635d683cdbcf98911f4113323f54f31aecd4d61f141ba6ac61e04" protocol=ttrpc version=3
Sep 12 23:11:21.522334 systemd[1]: Started cri-containerd-6c2a50195e1a9253a57dd807eb1be7104b2fd38a3073ee1e3f5e33f8f29f1784.scope - libcontainer container 6c2a50195e1a9253a57dd807eb1be7104b2fd38a3073ee1e3f5e33f8f29f1784.
Sep 12 23:11:21.617434 systemd[1]: cri-containerd-6c2a50195e1a9253a57dd807eb1be7104b2fd38a3073ee1e3f5e33f8f29f1784.scope: Deactivated successfully.
Sep 12 23:11:21.621068 containerd[1590]: time="2025-09-12T23:11:21.621013375Z" level=info msg="TaskExit event in podsandbox handler container_id:\"6c2a50195e1a9253a57dd807eb1be7104b2fd38a3073ee1e3f5e33f8f29f1784\" id:\"6c2a50195e1a9253a57dd807eb1be7104b2fd38a3073ee1e3f5e33f8f29f1784\" pid:4734 exited_at:{seconds:1757718681 nanos:620426760}"
Sep 12 23:11:21.720382 containerd[1590]: time="2025-09-12T23:11:21.718695276Z" level=info msg="received exit event container_id:\"6c2a50195e1a9253a57dd807eb1be7104b2fd38a3073ee1e3f5e33f8f29f1784\" id:\"6c2a50195e1a9253a57dd807eb1be7104b2fd38a3073ee1e3f5e33f8f29f1784\" pid:4734 exited_at:{seconds:1757718681 nanos:620426760}"
Sep 12 23:11:21.720382 containerd[1590]: time="2025-09-12T23:11:21.720107838Z" level=info msg="StartContainer for \"6c2a50195e1a9253a57dd807eb1be7104b2fd38a3073ee1e3f5e33f8f29f1784\" returns successfully"
Sep 12 23:11:22.170974 kubelet[2793]: E0912 23:11:22.166256 2793 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 12 23:11:22.171257 containerd[1590]: time="2025-09-12T23:11:22.169946193Z" level=info msg="CreateContainer within sandbox \"bb00b06580a021fc539667a3693ba305f3330af2634a0e980d745747f826307a\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Sep 12 23:11:22.210968 kubelet[2793]: E0912 23:11:22.208562 2793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-668d6bf9bc-njc7q" podUID="ed4d4f27-cfbe-4a70-8eb2-3f84fe794720"
Sep 12 23:11:22.212098 kubelet[2793]: E0912 23:11:22.212037 2793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-668d6bf9bc-74cg8" podUID="5631c6c5-f4b6-4a9f-ae75-0fe99bba2b77"
Sep 12 23:11:22.243535 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1078606554.mount: Deactivated successfully.
Sep 12 23:11:22.253191 containerd[1590]: time="2025-09-12T23:11:22.252996665Z" level=info msg="Container 06ead0df999ad545fb947d9a6062018e0f7fbde6715109419b5f2625574f0f89: CDI devices from CRI Config.CDIDevices: []"
Sep 12 23:11:22.267849 containerd[1590]: time="2025-09-12T23:11:22.266032360Z" level=info msg="CreateContainer within sandbox \"bb00b06580a021fc539667a3693ba305f3330af2634a0e980d745747f826307a\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"06ead0df999ad545fb947d9a6062018e0f7fbde6715109419b5f2625574f0f89\""
Sep 12 23:11:22.269905 containerd[1590]: time="2025-09-12T23:11:22.268414929Z" level=info msg="StartContainer for \"06ead0df999ad545fb947d9a6062018e0f7fbde6715109419b5f2625574f0f89\""
Sep 12 23:11:22.269905 containerd[1590]: time="2025-09-12T23:11:22.269531037Z" level=info msg="connecting to shim 06ead0df999ad545fb947d9a6062018e0f7fbde6715109419b5f2625574f0f89" address="unix:///run/containerd/s/cc16ccffece635d683cdbcf98911f4113323f54f31aecd4d61f141ba6ac61e04" protocol=ttrpc version=3
Sep 12 23:11:22.312089 systemd[1]: Started cri-containerd-06ead0df999ad545fb947d9a6062018e0f7fbde6715109419b5f2625574f0f89.scope - libcontainer container 06ead0df999ad545fb947d9a6062018e0f7fbde6715109419b5f2625574f0f89.
Sep 12 23:11:22.390197 containerd[1590]: time="2025-09-12T23:11:22.390079081Z" level=info msg="StartContainer for \"06ead0df999ad545fb947d9a6062018e0f7fbde6715109419b5f2625574f0f89\" returns successfully"
Sep 12 23:11:22.400322 systemd[1]: cri-containerd-06ead0df999ad545fb947d9a6062018e0f7fbde6715109419b5f2625574f0f89.scope: Deactivated successfully.
Sep 12 23:11:22.408351 containerd[1590]: time="2025-09-12T23:11:22.408311287Z" level=info msg="received exit event container_id:\"06ead0df999ad545fb947d9a6062018e0f7fbde6715109419b5f2625574f0f89\" id:\"06ead0df999ad545fb947d9a6062018e0f7fbde6715109419b5f2625574f0f89\" pid:4778 exited_at:{seconds:1757718682 nanos:407320411}"
Sep 12 23:11:22.408786 containerd[1590]: time="2025-09-12T23:11:22.408617530Z" level=info msg="TaskExit event in podsandbox handler container_id:\"06ead0df999ad545fb947d9a6062018e0f7fbde6715109419b5f2625574f0f89\" id:\"06ead0df999ad545fb947d9a6062018e0f7fbde6715109419b5f2625574f0f89\" pid:4778 exited_at:{seconds:1757718682 nanos:407320411}"
Sep 12 23:11:23.039381 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-06ead0df999ad545fb947d9a6062018e0f7fbde6715109419b5f2625574f0f89-rootfs.mount: Deactivated successfully.
Sep 12 23:11:23.197169 kubelet[2793]: E0912 23:11:23.189072 2793 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 12 23:11:23.207569 containerd[1590]: time="2025-09-12T23:11:23.195002759Z" level=info msg="CreateContainer within sandbox \"bb00b06580a021fc539667a3693ba305f3330af2634a0e980d745747f826307a\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Sep 12 23:11:23.277605 containerd[1590]: time="2025-09-12T23:11:23.277529633Z" level=info msg="Container 57e6364820761a3827b62a4e8a0fbf122b3e880841b70fdf59fd9c2c19a2b4a6: CDI devices from CRI Config.CDIDevices: []"
Sep 12 23:11:23.281115 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount916535451.mount: Deactivated successfully.
Sep 12 23:11:23.294591 containerd[1590]: time="2025-09-12T23:11:23.293405478Z" level=info msg="CreateContainer within sandbox \"bb00b06580a021fc539667a3693ba305f3330af2634a0e980d745747f826307a\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"57e6364820761a3827b62a4e8a0fbf122b3e880841b70fdf59fd9c2c19a2b4a6\""
Sep 12 23:11:23.297839 containerd[1590]: time="2025-09-12T23:11:23.295629951Z" level=info msg="StartContainer for \"57e6364820761a3827b62a4e8a0fbf122b3e880841b70fdf59fd9c2c19a2b4a6\""
Sep 12 23:11:23.302297 kubelet[2793]: E0912 23:11:23.302123 2793 kubelet.go:3002] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Sep 12 23:11:23.303322 containerd[1590]: time="2025-09-12T23:11:23.303197853Z" level=info msg="connecting to shim 57e6364820761a3827b62a4e8a0fbf122b3e880841b70fdf59fd9c2c19a2b4a6" address="unix:///run/containerd/s/cc16ccffece635d683cdbcf98911f4113323f54f31aecd4d61f141ba6ac61e04" protocol=ttrpc version=3
Sep 12 23:11:23.351144 systemd[1]: Started cri-containerd-57e6364820761a3827b62a4e8a0fbf122b3e880841b70fdf59fd9c2c19a2b4a6.scope - libcontainer container 57e6364820761a3827b62a4e8a0fbf122b3e880841b70fdf59fd9c2c19a2b4a6.
Sep 12 23:11:23.451293 systemd[1]: cri-containerd-57e6364820761a3827b62a4e8a0fbf122b3e880841b70fdf59fd9c2c19a2b4a6.scope: Deactivated successfully.
Sep 12 23:11:23.453902 containerd[1590]: time="2025-09-12T23:11:23.453646597Z" level=info msg="received exit event container_id:\"57e6364820761a3827b62a4e8a0fbf122b3e880841b70fdf59fd9c2c19a2b4a6\" id:\"57e6364820761a3827b62a4e8a0fbf122b3e880841b70fdf59fd9c2c19a2b4a6\" pid:4821 exited_at:{seconds:1757718683 nanos:453413937}"
Sep 12 23:11:23.453902 containerd[1590]: time="2025-09-12T23:11:23.453717324Z" level=info msg="TaskExit event in podsandbox handler container_id:\"57e6364820761a3827b62a4e8a0fbf122b3e880841b70fdf59fd9c2c19a2b4a6\" id:\"57e6364820761a3827b62a4e8a0fbf122b3e880841b70fdf59fd9c2c19a2b4a6\" pid:4821 exited_at:{seconds:1757718683 nanos:453413937}"
Sep 12 23:11:23.470639 containerd[1590]: time="2025-09-12T23:11:23.470571722Z" level=info msg="StartContainer for \"57e6364820761a3827b62a4e8a0fbf122b3e880841b70fdf59fd9c2c19a2b4a6\" returns successfully"
Sep 12 23:11:23.501519 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-57e6364820761a3827b62a4e8a0fbf122b3e880841b70fdf59fd9c2c19a2b4a6-rootfs.mount: Deactivated successfully.
Sep 12 23:11:24.201201 kubelet[2793]: E0912 23:11:24.198314 2793 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 12 23:11:24.209045 containerd[1590]: time="2025-09-12T23:11:24.206143867Z" level=info msg="CreateContainer within sandbox \"bb00b06580a021fc539667a3693ba305f3330af2634a0e980d745747f826307a\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Sep 12 23:11:24.211690 kubelet[2793]: E0912 23:11:24.206597 2793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-668d6bf9bc-74cg8" podUID="5631c6c5-f4b6-4a9f-ae75-0fe99bba2b77"
Sep 12 23:11:24.211690 kubelet[2793]: E0912 23:11:24.206991 2793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-668d6bf9bc-njc7q" podUID="ed4d4f27-cfbe-4a70-8eb2-3f84fe794720"
Sep 12 23:11:24.642613 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2402945833.mount: Deactivated successfully.
Sep 12 23:11:24.646220 containerd[1590]: time="2025-09-12T23:11:24.645122650Z" level=info msg="Container 3007fa552eb75f33ef4acafabc6fb9a221674f31403d8e10ac1bd3f9d823fcf9: CDI devices from CRI Config.CDIDevices: []"
Sep 12 23:11:24.677678 containerd[1590]: time="2025-09-12T23:11:24.677564329Z" level=info msg="CreateContainer within sandbox \"bb00b06580a021fc539667a3693ba305f3330af2634a0e980d745747f826307a\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"3007fa552eb75f33ef4acafabc6fb9a221674f31403d8e10ac1bd3f9d823fcf9\""
Sep 12 23:11:24.686913 containerd[1590]: time="2025-09-12T23:11:24.684095528Z" level=info msg="StartContainer for \"3007fa552eb75f33ef4acafabc6fb9a221674f31403d8e10ac1bd3f9d823fcf9\""
Sep 12 23:11:24.686913 containerd[1590]: time="2025-09-12T23:11:24.686566229Z" level=info msg="connecting to shim 3007fa552eb75f33ef4acafabc6fb9a221674f31403d8e10ac1bd3f9d823fcf9" address="unix:///run/containerd/s/cc16ccffece635d683cdbcf98911f4113323f54f31aecd4d61f141ba6ac61e04" protocol=ttrpc version=3
Sep 12 23:11:24.776204 systemd[1]: Started cri-containerd-3007fa552eb75f33ef4acafabc6fb9a221674f31403d8e10ac1bd3f9d823fcf9.scope - libcontainer container 3007fa552eb75f33ef4acafabc6fb9a221674f31403d8e10ac1bd3f9d823fcf9.
Sep 12 23:11:24.863296 systemd[1]: cri-containerd-3007fa552eb75f33ef4acafabc6fb9a221674f31403d8e10ac1bd3f9d823fcf9.scope: Deactivated successfully.
Sep 12 23:11:24.865165 containerd[1590]: time="2025-09-12T23:11:24.864003109Z" level=info msg="TaskExit event in podsandbox handler container_id:\"3007fa552eb75f33ef4acafabc6fb9a221674f31403d8e10ac1bd3f9d823fcf9\" id:\"3007fa552eb75f33ef4acafabc6fb9a221674f31403d8e10ac1bd3f9d823fcf9\" pid:4859 exited_at:{seconds:1757718684 nanos:863359773}"
Sep 12 23:11:24.870938 containerd[1590]: time="2025-09-12T23:11:24.869712156Z" level=info msg="received exit event container_id:\"3007fa552eb75f33ef4acafabc6fb9a221674f31403d8e10ac1bd3f9d823fcf9\" id:\"3007fa552eb75f33ef4acafabc6fb9a221674f31403d8e10ac1bd3f9d823fcf9\" pid:4859 exited_at:{seconds:1757718684 nanos:863359773}"
Sep 12 23:11:24.873106 containerd[1590]: time="2025-09-12T23:11:24.873048973Z" level=info msg="StartContainer for \"3007fa552eb75f33ef4acafabc6fb9a221674f31403d8e10ac1bd3f9d823fcf9\" returns successfully"
Sep 12 23:11:24.909040 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-3007fa552eb75f33ef4acafabc6fb9a221674f31403d8e10ac1bd3f9d823fcf9-rootfs.mount: Deactivated successfully.
Sep 12 23:11:25.206055 kubelet[2793]: E0912 23:11:25.205918 2793 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 12 23:11:25.220187 containerd[1590]: time="2025-09-12T23:11:25.219829018Z" level=info msg="CreateContainer within sandbox \"bb00b06580a021fc539667a3693ba305f3330af2634a0e980d745747f826307a\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Sep 12 23:11:25.236561 kubelet[2793]: I0912 23:11:25.236450 2793 setters.go:602] "Node became not ready" node="localhost" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-12T23:11:25Z","lastTransitionTime":"2025-09-12T23:11:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"}
Sep 12 23:11:25.241099 containerd[1590]: time="2025-09-12T23:11:25.240234583Z" level=info msg="Container ac0820ad66688121c553fdb412af7eef4406b3fbd95e8fa28a8fd12879339aa6: CDI devices from CRI Config.CDIDevices: []"
Sep 12 23:11:25.250233 containerd[1590]: time="2025-09-12T23:11:25.250170063Z" level=info msg="CreateContainer within sandbox \"bb00b06580a021fc539667a3693ba305f3330af2634a0e980d745747f826307a\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"ac0820ad66688121c553fdb412af7eef4406b3fbd95e8fa28a8fd12879339aa6\""
Sep 12 23:11:25.250979 containerd[1590]: time="2025-09-12T23:11:25.250939573Z" level=info msg="StartContainer for \"ac0820ad66688121c553fdb412af7eef4406b3fbd95e8fa28a8fd12879339aa6\""
Sep 12 23:11:25.252072 containerd[1590]: time="2025-09-12T23:11:25.252043008Z" level=info msg="connecting to shim ac0820ad66688121c553fdb412af7eef4406b3fbd95e8fa28a8fd12879339aa6" address="unix:///run/containerd/s/cc16ccffece635d683cdbcf98911f4113323f54f31aecd4d61f141ba6ac61e04" protocol=ttrpc version=3
Sep 12 23:11:25.288100 systemd[1]: Started cri-containerd-ac0820ad66688121c553fdb412af7eef4406b3fbd95e8fa28a8fd12879339aa6.scope - libcontainer container ac0820ad66688121c553fdb412af7eef4406b3fbd95e8fa28a8fd12879339aa6.
Sep 12 23:11:25.333864 containerd[1590]: time="2025-09-12T23:11:25.333795281Z" level=info msg="StartContainer for \"ac0820ad66688121c553fdb412af7eef4406b3fbd95e8fa28a8fd12879339aa6\" returns successfully"
Sep 12 23:11:25.409488 containerd[1590]: time="2025-09-12T23:11:25.409435423Z" level=info msg="TaskExit event in podsandbox handler container_id:\"ac0820ad66688121c553fdb412af7eef4406b3fbd95e8fa28a8fd12879339aa6\" id:\"a9c30ef1ec41495692f675152689116ed2df33800de553a13cf44feb58c8fefd\" pid:4926 exited_at:{seconds:1757718685 nanos:409051429}"
Sep 12 23:11:25.900974 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni-avx))
Sep 12 23:11:26.208103 kubelet[2793]: E0912 23:11:26.205861 2793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-668d6bf9bc-74cg8" podUID="5631c6c5-f4b6-4a9f-ae75-0fe99bba2b77"
Sep 12 23:11:26.208103 kubelet[2793]: E0912 23:11:26.206806 2793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-668d6bf9bc-njc7q" podUID="ed4d4f27-cfbe-4a70-8eb2-3f84fe794720"
Sep 12 23:11:26.239083 kubelet[2793]: E0912 23:11:26.237402 2793 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 12 23:11:26.304244 kubelet[2793]: I0912 23:11:26.303303 2793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-vspp7" podStartSLOduration=6.303271297 podStartE2EDuration="6.303271297s" podCreationTimestamp="2025-09-12 23:11:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-12 23:11:26.29808723 +0000 UTC m=+113.240426372" watchObservedRunningTime="2025-09-12 23:11:26.303271297 +0000 UTC m=+113.245610409"
Sep 12 23:11:27.263277 kubelet[2793]: E0912 23:11:27.262376 2793 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 12 23:11:28.165241 containerd[1590]: time="2025-09-12T23:11:28.165146655Z" level=info msg="TaskExit event in podsandbox handler container_id:\"ac0820ad66688121c553fdb412af7eef4406b3fbd95e8fa28a8fd12879339aa6\" id:\"b2de8fddfbb7e7dc2651fb25df789b55390041454e684d654c72bada5121f049\" pid:5070 exit_status:1 exited_at:{seconds:1757718688 nanos:164729016}"
Sep 12 23:11:28.203339 kubelet[2793]: E0912 23:11:28.203227 2793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-668d6bf9bc-74cg8" podUID="5631c6c5-f4b6-4a9f-ae75-0fe99bba2b77"
Sep 12 23:11:28.204041 kubelet[2793]: E0912 23:11:28.203860 2793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-668d6bf9bc-njc7q" podUID="ed4d4f27-cfbe-4a70-8eb2-3f84fe794720"
Sep 12 23:11:30.207810 kubelet[2793]: E0912 23:11:30.205999 2793 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 12 23:11:30.207810 kubelet[2793]: E0912 23:11:30.207648 2793 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 12 23:11:30.493711 containerd[1590]: time="2025-09-12T23:11:30.493511496Z" level=info msg="TaskExit event in podsandbox handler container_id:\"ac0820ad66688121c553fdb412af7eef4406b3fbd95e8fa28a8fd12879339aa6\" id:\"fe02f1f9aa61f7933cd371c8673f01d2056abb32bb324bd26d7872e28b89e93b\" pid:5396 exit_status:1 exited_at:{seconds:1757718690 nanos:492054393}"
Sep 12 23:11:30.711591 systemd-networkd[1480]: lxc_health: Link UP
Sep 12 23:11:30.711967 systemd-networkd[1480]: lxc_health: Gained carrier
Sep 12 23:11:31.139440 kubelet[2793]: E0912 23:11:31.139229 2793 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 12 23:11:31.265077 kubelet[2793]: E0912 23:11:31.264995 2793 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 12 23:11:32.268286 kubelet[2793]: E0912 23:11:32.268238 2793 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 12 23:11:32.603891 containerd[1590]: time="2025-09-12T23:11:32.603835839Z" level=info msg="TaskExit event in podsandbox handler container_id:\"ac0820ad66688121c553fdb412af7eef4406b3fbd95e8fa28a8fd12879339aa6\" id:\"d89ee322262edd1956b85634e343f146374157390f591cc6a316faa3db21214a\" pid:5493 exited_at:{seconds:1757718692 nanos:603405565}"
Sep 12 23:11:32.712167 systemd-networkd[1480]: lxc_health: Gained IPv6LL
Sep 12 23:11:33.229277 containerd[1590]: time="2025-09-12T23:11:33.226054477Z" level=info msg="StopPodSandbox for \"993e4f193e79b0c050eb6dfb3d5124b234e4ceb25c2e61982d0dc8570075b8ce\""
Sep 12 23:11:33.229459 containerd[1590]: time="2025-09-12T23:11:33.229384502Z" level=info msg="TearDown network for sandbox \"993e4f193e79b0c050eb6dfb3d5124b234e4ceb25c2e61982d0dc8570075b8ce\" successfully"
Sep 12 23:11:33.229459 containerd[1590]: time="2025-09-12T23:11:33.229411615Z" level=info msg="StopPodSandbox for \"993e4f193e79b0c050eb6dfb3d5124b234e4ceb25c2e61982d0dc8570075b8ce\" returns successfully"
Sep 12 23:11:33.247046 containerd[1590]: time="2025-09-12T23:11:33.246935360Z" level=info msg="RemovePodSandbox for \"993e4f193e79b0c050eb6dfb3d5124b234e4ceb25c2e61982d0dc8570075b8ce\""
Sep 12 23:11:33.247046 containerd[1590]: time="2025-09-12T23:11:33.247058279Z" level=info msg="Forcibly stopping sandbox \"993e4f193e79b0c050eb6dfb3d5124b234e4ceb25c2e61982d0dc8570075b8ce\""
Sep 12 23:11:33.247638 containerd[1590]: time="2025-09-12T23:11:33.247204021Z" level=info msg="TearDown network for sandbox \"993e4f193e79b0c050eb6dfb3d5124b234e4ceb25c2e61982d0dc8570075b8ce\" successfully"
Sep 12 23:11:33.261237 containerd[1590]: time="2025-09-12T23:11:33.261146065Z" level=info msg="Ensure that sandbox 993e4f193e79b0c050eb6dfb3d5124b234e4ceb25c2e61982d0dc8570075b8ce in task-service has been cleanup successfully"
Sep 12 23:11:33.506913 containerd[1590]: time="2025-09-12T23:11:33.506678735Z" level=info msg="RemovePodSandbox \"993e4f193e79b0c050eb6dfb3d5124b234e4ceb25c2e61982d0dc8570075b8ce\" returns successfully"
Sep 12 23:11:33.508623 containerd[1590]: time="2025-09-12T23:11:33.508499234Z" level=info msg="StopPodSandbox for \"73890b387f12e6c0d8f890b8a0c17a32291df62bf255c013d54f4848b8223b37\""
Sep 12 23:11:33.508998 containerd[1590]: time="2025-09-12T23:11:33.508669954Z" level=info msg="TearDown network for sandbox \"73890b387f12e6c0d8f890b8a0c17a32291df62bf255c013d54f4848b8223b37\" successfully"
Sep 12 23:11:33.508998 containerd[1590]: time="2025-09-12T23:11:33.508688360Z" level=info msg="StopPodSandbox for \"73890b387f12e6c0d8f890b8a0c17a32291df62bf255c013d54f4848b8223b37\" returns successfully"
Sep 12 23:11:33.509302 containerd[1590]: time="2025-09-12T23:11:33.509253335Z" level=info msg="RemovePodSandbox for \"73890b387f12e6c0d8f890b8a0c17a32291df62bf255c013d54f4848b8223b37\""
Sep 12 23:11:33.509302 containerd[1590]: time="2025-09-12T23:11:33.509289385Z" level=info msg="Forcibly stopping sandbox \"73890b387f12e6c0d8f890b8a0c17a32291df62bf255c013d54f4848b8223b37\""
Sep 12 23:11:33.509512 containerd[1590]: time="2025-09-12T23:11:33.509376083Z" level=info msg="TearDown network for sandbox \"73890b387f12e6c0d8f890b8a0c17a32291df62bf255c013d54f4848b8223b37\" successfully"
Sep 12 23:11:33.512392 containerd[1590]: time="2025-09-12T23:11:33.512310932Z" level=info msg="Ensure that sandbox 73890b387f12e6c0d8f890b8a0c17a32291df62bf255c013d54f4848b8223b37 in task-service has been cleanup successfully"
Sep 12 23:11:33.521183 containerd[1590]: time="2025-09-12T23:11:33.521095989Z" level=info msg="RemovePodSandbox \"73890b387f12e6c0d8f890b8a0c17a32291df62bf255c013d54f4848b8223b37\" returns successfully"
Sep 12 23:11:34.848484 containerd[1590]: time="2025-09-12T23:11:34.848409804Z" level=info msg="TaskExit event in podsandbox handler container_id:\"ac0820ad66688121c553fdb412af7eef4406b3fbd95e8fa28a8fd12879339aa6\" id:\"a624815d86489fc4afef32f6400dfb2ed5e6e9c41cd9019b6f841abf344b9bb0\" pid:5523 exited_at:{seconds:1757718694 nanos:847513706}"
Sep 12 23:11:37.025929 containerd[1590]: time="2025-09-12T23:11:37.025866074Z" level=info msg="TaskExit event in podsandbox handler container_id:\"ac0820ad66688121c553fdb412af7eef4406b3fbd95e8fa28a8fd12879339aa6\" id:\"840a04a838784a88862890af021d1d78e69da51d0a9318d40ca03156f6cd927b\" pid:5562 exited_at:{seconds:1757718697 nanos:25280920}"
Sep 12 23:11:37.037262 sshd[4693]: Connection closed by 10.0.0.1 port 57724
Sep 12 23:11:37.037931 sshd-session[4663]: pam_unix(sshd:session): session closed for user core
Sep 12 23:11:37.044013 systemd[1]: sshd@32-10.0.0.144:22-10.0.0.1:57724.service: Deactivated successfully.
Sep 12 23:11:37.046478 systemd[1]: session-33.scope: Deactivated successfully.
Sep 12 23:11:37.047457 systemd-logind[1573]: Session 33 logged out. Waiting for processes to exit.
Sep 12 23:11:37.049375 systemd-logind[1573]: Removed session 33.