Sep 12 00:16:38.817646 kernel: Linux version 6.12.46-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241221 p7) 14.2.1 20241221, GNU ld (Gentoo 2.44 p1) 2.44.0) #1 SMP PREEMPT_DYNAMIC Thu Sep 11 22:19:36 -00 2025
Sep 12 00:16:38.817677 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=04dc7b04a37dcb996cc4d6074b142179401d5685abf61ddcbaff7d77d0988990
Sep 12 00:16:38.817686 kernel: BIOS-provided physical RAM map:
Sep 12 00:16:38.817692 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Sep 12 00:16:38.817699 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Sep 12 00:16:38.817705 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Sep 12 00:16:38.817713 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000009cfdbfff] usable
Sep 12 00:16:38.817723 kernel: BIOS-e820: [mem 0x000000009cfdc000-0x000000009cffffff] reserved
Sep 12 00:16:38.817734 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved
Sep 12 00:16:38.817743 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved
Sep 12 00:16:38.817753 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Sep 12 00:16:38.817761 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Sep 12 00:16:38.817770 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
Sep 12 00:16:38.817777 kernel: NX (Execute Disable) protection: active
Sep 12 00:16:38.817787 kernel: APIC: Static calls initialized
Sep 12 00:16:38.817794 kernel: SMBIOS 2.8 present.
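The BIOS-e820 lines above are the firmware's inclusive physical-address ranges. A minimal sketch (with the two "usable" ranges copied verbatim from the log) of how they total up:

```python
import re

# Only the two ranges the firmware marked "usable", taken from the log above.
e820_lines = [
    "BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable",
    "BIOS-e820: [mem 0x0000000000100000-0x000000009cfdbfff] usable",
]

pattern = re.compile(r"\[mem (0x[0-9a-f]+)-(0x[0-9a-f]+)\] (\w+)")

usable = 0
for line in e820_lines:
    start, end, kind = pattern.search(line).groups()
    if kind == "usable":
        usable += int(end, 16) - int(start, 16) + 1  # ranges are inclusive

print(f"usable RAM: {usable} bytes (~{usable / 2**30:.2f} GiB)")
# ~2.45 GiB, roughly matching the "Memory: 2430964K/2571752K" line later in this log
```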
Sep 12 00:16:38.817805 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.16.2-debian-1.16.2-1 04/01/2014
Sep 12 00:16:38.817812 kernel: DMI: Memory slots populated: 1/1
Sep 12 00:16:38.817819 kernel: Hypervisor detected: KVM
Sep 12 00:16:38.817826 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Sep 12 00:16:38.817833 kernel: kvm-clock: using sched offset of 4071310921 cycles
Sep 12 00:16:38.817840 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Sep 12 00:16:38.817848 kernel: tsc: Detected 2794.748 MHz processor
Sep 12 00:16:38.817858 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Sep 12 00:16:38.817865 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Sep 12 00:16:38.817903 kernel: last_pfn = 0x9cfdc max_arch_pfn = 0x400000000
Sep 12 00:16:38.817911 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Sep 12 00:16:38.817918 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Sep 12 00:16:38.817925 kernel: Using GB pages for direct mapping
Sep 12 00:16:38.817932 kernel: ACPI: Early table checksum verification disabled
Sep 12 00:16:38.817940 kernel: ACPI: RSDP 0x00000000000F59D0 000014 (v00 BOCHS )
Sep 12 00:16:38.817947 kernel: ACPI: RSDT 0x000000009CFE241A 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 12 00:16:38.817957 kernel: ACPI: FACP 0x000000009CFE21FA 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Sep 12 00:16:38.817964 kernel: ACPI: DSDT 0x000000009CFE0040 0021BA (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 12 00:16:38.817971 kernel: ACPI: FACS 0x000000009CFE0000 000040
Sep 12 00:16:38.817980 kernel: ACPI: APIC 0x000000009CFE22EE 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 12 00:16:38.817987 kernel: ACPI: HPET 0x000000009CFE237E 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 12 00:16:38.817994 kernel: ACPI: MCFG 0x000000009CFE23B6 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 12 00:16:38.818001 kernel: ACPI: WAET 0x000000009CFE23F2 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 12 00:16:38.818009 kernel: ACPI: Reserving FACP table memory at [mem 0x9cfe21fa-0x9cfe22ed]
Sep 12 00:16:38.818021 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cfe0040-0x9cfe21f9]
Sep 12 00:16:38.818029 kernel: ACPI: Reserving FACS table memory at [mem 0x9cfe0000-0x9cfe003f]
Sep 12 00:16:38.818036 kernel: ACPI: Reserving APIC table memory at [mem 0x9cfe22ee-0x9cfe237d]
Sep 12 00:16:38.818044 kernel: ACPI: Reserving HPET table memory at [mem 0x9cfe237e-0x9cfe23b5]
Sep 12 00:16:38.818051 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cfe23b6-0x9cfe23f1]
Sep 12 00:16:38.818059 kernel: ACPI: Reserving WAET table memory at [mem 0x9cfe23f2-0x9cfe2419]
Sep 12 00:16:38.818068 kernel: No NUMA configuration found
Sep 12 00:16:38.818075 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cfdbfff]
Sep 12 00:16:38.818083 kernel: NODE_DATA(0) allocated [mem 0x9cfd4dc0-0x9cfdbfff]
Sep 12 00:16:38.818090 kernel: Zone ranges:
Sep 12 00:16:38.818098 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Sep 12 00:16:38.818105 kernel: DMA32 [mem 0x0000000001000000-0x000000009cfdbfff]
Sep 12 00:16:38.818113 kernel: Normal empty
Sep 12 00:16:38.818120 kernel: Device empty
Sep 12 00:16:38.818128 kernel: Movable zone start for each node
Sep 12 00:16:38.818135 kernel: Early memory node ranges
Sep 12 00:16:38.818144 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
Sep 12 00:16:38.818151 kernel: node 0: [mem 0x0000000000100000-0x000000009cfdbfff]
Sep 12 00:16:38.818159 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cfdbfff]
Sep 12 00:16:38.818166 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Sep 12 00:16:38.818174 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Sep 12 00:16:38.818181 kernel: On node 0, zone DMA32: 12324 pages in unavailable ranges
Sep 12 00:16:38.818189 kernel: ACPI: PM-Timer IO Port: 0x608
Sep 12 00:16:38.818199 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Sep 12 00:16:38.818206 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Sep 12 00:16:38.818216 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Sep 12 00:16:38.818224 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Sep 12 00:16:38.818233 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Sep 12 00:16:38.818241 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Sep 12 00:16:38.818248 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Sep 12 00:16:38.818256 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Sep 12 00:16:38.818263 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Sep 12 00:16:38.818271 kernel: TSC deadline timer available
Sep 12 00:16:38.818278 kernel: CPU topo: Max. logical packages: 1
Sep 12 00:16:38.818288 kernel: CPU topo: Max. logical dies: 1
Sep 12 00:16:38.818295 kernel: CPU topo: Max. dies per package: 1
Sep 12 00:16:38.818302 kernel: CPU topo: Max. threads per core: 1
Sep 12 00:16:38.818310 kernel: CPU topo: Num. cores per package: 4
Sep 12 00:16:38.818317 kernel: CPU topo: Num. threads per package: 4
Sep 12 00:16:38.818324 kernel: CPU topo: Allowing 4 present CPUs plus 0 hotplug CPUs
Sep 12 00:16:38.818332 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Sep 12 00:16:38.818339 kernel: kvm-guest: KVM setup pv remote TLB flush
Sep 12 00:16:38.818347 kernel: kvm-guest: setup PV sched yield
Sep 12 00:16:38.818356 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices
Sep 12 00:16:38.818364 kernel: Booting paravirtualized kernel on KVM
Sep 12 00:16:38.818372 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Sep 12 00:16:38.818379 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1
Sep 12 00:16:38.818387 kernel: percpu: Embedded 60 pages/cpu s207832 r8192 d29736 u524288
Sep 12 00:16:38.818394 kernel: pcpu-alloc: s207832 r8192 d29736 u524288 alloc=1*2097152
Sep 12 00:16:38.818401 kernel: pcpu-alloc: [0] 0 1 2 3
Sep 12 00:16:38.818409 kernel: kvm-guest: PV spinlocks enabled
Sep 12 00:16:38.818416 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Sep 12 00:16:38.818427 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=04dc7b04a37dcb996cc4d6074b142179401d5685abf61ddcbaff7d77d0988990
Sep 12 00:16:38.818435 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
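Note that "rootflags=rw mount.usrflags=ro" appears twice in the Kernel command line above, once prepended ahead of the BOOT_IMAGE options; the duplication is harmless here since the values agree. A minimal sketch of splitting such a command line into key=value options, where a dict naturally keeps the last occurrence:

```python
# Split a kernel command line into options; duplicate keys resolve to the
# last occurrence. Shortened here to a few of the options from the log above.
cmdline = (
    "rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a "
    "mount.usr=/dev/mapper/usr rootflags=rw mount.usrflags=ro "
    "root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected"
)

options = {}
for token in cmdline.split():
    key, sep, value = token.partition("=")
    options[key] = value if sep else None  # bare flags carry no value

print(options["root"])     # LABEL=ROOT
print(options["console"])  # ttyS0,115200
```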
Sep 12 00:16:38.818443 kernel: random: crng init done
Sep 12 00:16:38.818450 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Sep 12 00:16:38.818458 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Sep 12 00:16:38.818465 kernel: Fallback order for Node 0: 0
Sep 12 00:16:38.818473 kernel: Built 1 zonelists, mobility grouping on. Total pages: 642938
Sep 12 00:16:38.818480 kernel: Policy zone: DMA32
Sep 12 00:16:38.818490 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Sep 12 00:16:38.818497 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Sep 12 00:16:38.818505 kernel: ftrace: allocating 40120 entries in 157 pages
Sep 12 00:16:38.818512 kernel: ftrace: allocated 157 pages with 5 groups
Sep 12 00:16:38.818520 kernel: Dynamic Preempt: voluntary
Sep 12 00:16:38.818527 kernel: rcu: Preemptible hierarchical RCU implementation.
Sep 12 00:16:38.818535 kernel: rcu: RCU event tracing is enabled.
Sep 12 00:16:38.818543 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Sep 12 00:16:38.818550 kernel: Trampoline variant of Tasks RCU enabled.
Sep 12 00:16:38.818560 kernel: Rude variant of Tasks RCU enabled.
Sep 12 00:16:38.818570 kernel: Tracing variant of Tasks RCU enabled.
Sep 12 00:16:38.818578 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Sep 12 00:16:38.818585 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Sep 12 00:16:38.818593 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Sep 12 00:16:38.818600 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Sep 12 00:16:38.818608 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Sep 12 00:16:38.818616 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16
Sep 12 00:16:38.818623 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Sep 12 00:16:38.818640 kernel: Console: colour VGA+ 80x25
Sep 12 00:16:38.818648 kernel: printk: legacy console [ttyS0] enabled
Sep 12 00:16:38.818655 kernel: ACPI: Core revision 20240827
Sep 12 00:16:38.818666 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Sep 12 00:16:38.818674 kernel: APIC: Switch to symmetric I/O mode setup
Sep 12 00:16:38.818682 kernel: x2apic enabled
Sep 12 00:16:38.818692 kernel: APIC: Switched APIC routing to: physical x2apic
Sep 12 00:16:38.818700 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask()
Sep 12 00:16:38.818708 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself()
Sep 12 00:16:38.818718 kernel: kvm-guest: setup PV IPIs
Sep 12 00:16:38.818725 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Sep 12 00:16:38.818733 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x2848df6a9de, max_idle_ns: 440795280912 ns
Sep 12 00:16:38.818741 kernel: Calibrating delay loop (skipped) preset value.. 5589.49 BogoMIPS (lpj=2794748)
Sep 12 00:16:38.818749 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Sep 12 00:16:38.818757 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Sep 12 00:16:38.818765 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Sep 12 00:16:38.818772 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Sep 12 00:16:38.818782 kernel: Spectre V2 : Mitigation: Retpolines
Sep 12 00:16:38.818790 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Sep 12 00:16:38.818798 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls
Sep 12 00:16:38.818806 kernel: active return thunk: retbleed_return_thunk
Sep 12 00:16:38.818813 kernel: RETBleed: Mitigation: untrained return thunk
Sep 12 00:16:38.818821 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Sep 12 00:16:38.818829 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Sep 12 00:16:38.818837 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied!
Sep 12 00:16:38.818845 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options.
Sep 12 00:16:38.818855 kernel: active return thunk: srso_return_thunk
Sep 12 00:16:38.818863 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode
Sep 12 00:16:38.818882 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Sep 12 00:16:38.818898 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Sep 12 00:16:38.818906 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Sep 12 00:16:38.818913 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Sep 12 00:16:38.818921 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format.
Sep 12 00:16:38.818929 kernel: Freeing SMP alternatives memory: 32K
Sep 12 00:16:38.818939 kernel: pid_max: default: 32768 minimum: 301
Sep 12 00:16:38.818948 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima
Sep 12 00:16:38.818955 kernel: landlock: Up and running.
Sep 12 00:16:38.818963 kernel: SELinux: Initializing.
Sep 12 00:16:38.818973 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Sep 12 00:16:38.818982 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Sep 12 00:16:38.818990 kernel: smpboot: CPU0: AMD EPYC 7402P 24-Core Processor (family: 0x17, model: 0x31, stepping: 0x0)
Sep 12 00:16:38.818997 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
Sep 12 00:16:38.819005 kernel: ... version: 0
Sep 12 00:16:38.819015 kernel: ... bit width: 48
Sep 12 00:16:38.819023 kernel: ... generic registers: 6
Sep 12 00:16:38.819031 kernel: ... value mask: 0000ffffffffffff
Sep 12 00:16:38.819038 kernel: ... max period: 00007fffffffffff
Sep 12 00:16:38.819046 kernel: ... fixed-purpose events: 0
Sep 12 00:16:38.819054 kernel: ... event mask: 000000000000003f
Sep 12 00:16:38.819062 kernel: signal: max sigframe size: 1776
Sep 12 00:16:38.819069 kernel: rcu: Hierarchical SRCU implementation.
Sep 12 00:16:38.819077 kernel: rcu: Max phase no-delay instances is 400.
Sep 12 00:16:38.819087 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level
Sep 12 00:16:38.819095 kernel: smp: Bringing up secondary CPUs ...
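The BogoMIPS figure above is derived from the preset lpj value rather than measured ("Calibrating delay loop (skipped)"). A sketch of the kernel-style integer arithmetic behind both printed values, assuming CONFIG_HZ=1000 (an assumption, but it is what makes the numbers come out exactly):

```python
HZ = 1000      # assumption: typical CONFIG_HZ for this kind of kernel
lpj = 2794748  # loops-per-jiffy preset from the log line above
cpus = 4

# Per-CPU value using truncating integer math, as the kernel prints it:
print(f"{lpj // (500000 // HZ)}.{(lpj // (5000 // HZ)) % 100:02d} BogoMIPS")  # 5589.49

# SMP total over all four CPUs, matching the smpboot line that follows:
bogosum = cpus * lpj
print(f"{bogosum // (500000 // HZ)}.{(bogosum // (5000 // HZ)) % 100:02d}")   # 22357.98
```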
Sep 12 00:16:38.819103 kernel: smpboot: x86: Booting SMP configuration:
Sep 12 00:16:38.819111 kernel: .... node #0, CPUs: #1 #2 #3
Sep 12 00:16:38.819118 kernel: smp: Brought up 1 node, 4 CPUs
Sep 12 00:16:38.819126 kernel: smpboot: Total of 4 processors activated (22357.98 BogoMIPS)
Sep 12 00:16:38.819134 kernel: Memory: 2430964K/2571752K available (14336K kernel code, 2432K rwdata, 9960K rodata, 53836K init, 1080K bss, 134860K reserved, 0K cma-reserved)
Sep 12 00:16:38.819142 kernel: devtmpfs: initialized
Sep 12 00:16:38.819150 kernel: x86/mm: Memory block size: 128MB
Sep 12 00:16:38.819160 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Sep 12 00:16:38.819168 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Sep 12 00:16:38.819178 kernel: pinctrl core: initialized pinctrl subsystem
Sep 12 00:16:38.819186 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Sep 12 00:16:38.819194 kernel: audit: initializing netlink subsys (disabled)
Sep 12 00:16:38.819201 kernel: audit: type=2000 audit(1757636195.850:1): state=initialized audit_enabled=0 res=1
Sep 12 00:16:38.819209 kernel: thermal_sys: Registered thermal governor 'step_wise'
Sep 12 00:16:38.819217 kernel: thermal_sys: Registered thermal governor 'user_space'
Sep 12 00:16:38.819225 kernel: cpuidle: using governor menu
Sep 12 00:16:38.819234 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Sep 12 00:16:38.819242 kernel: dca service started, version 1.12.1
Sep 12 00:16:38.819250 kernel: PCI: ECAM [mem 0xb0000000-0xbfffffff] (base 0xb0000000) for domain 0000 [bus 00-ff]
Sep 12 00:16:38.819258 kernel: PCI: ECAM [mem 0xb0000000-0xbfffffff] reserved as E820 entry
Sep 12 00:16:38.819266 kernel: PCI: Using configuration type 1 for base access
Sep 12 00:16:38.819273 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
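The ECAM window above maps each bus/device/function's PCIe config space at a fixed offset from the logged base. A minimal sketch of the standard ECAM offset formula:

```python
# PCIe ECAM: config space for (bus, dev, fn) sits at
# base + (bus << 20 | dev << 15 | fn << 12).
ECAM_BASE = 0xB0000000  # from "PCI: ECAM [mem 0xb0000000-0xbfffffff] ... [bus 00-ff]"

def ecam_address(bus: int, dev: int, fn: int) -> int:
    return ECAM_BASE + (bus << 20) + (dev << 15) + (fn << 12)

# 0000:00:1f.2 is the Q35 AHCI controller enumerated later in this log.
print(hex(ecam_address(0x00, 0x1F, 0x2)))  # 0xb00fa000
```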
Sep 12 00:16:38.819281 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Sep 12 00:16:38.819291 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Sep 12 00:16:38.819300 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Sep 12 00:16:38.819311 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Sep 12 00:16:38.819320 kernel: ACPI: Added _OSI(Module Device)
Sep 12 00:16:38.819327 kernel: ACPI: Added _OSI(Processor Device)
Sep 12 00:16:38.819335 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Sep 12 00:16:38.819343 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Sep 12 00:16:38.819351 kernel: ACPI: Interpreter enabled
Sep 12 00:16:38.819358 kernel: ACPI: PM: (supports S0 S3 S5)
Sep 12 00:16:38.819366 kernel: ACPI: Using IOAPIC for interrupt routing
Sep 12 00:16:38.819376 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Sep 12 00:16:38.819387 kernel: PCI: Using E820 reservations for host bridge windows
Sep 12 00:16:38.819401 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Sep 12 00:16:38.819411 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Sep 12 00:16:38.819624 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Sep 12 00:16:38.819750 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR]
Sep 12 00:16:38.819869 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability]
Sep 12 00:16:38.819900 kernel: PCI host bridge to bus 0000:00
Sep 12 00:16:38.820044 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Sep 12 00:16:38.820160 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Sep 12 00:16:38.820276 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Sep 12 00:16:38.820384 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xafffffff window]
Sep 12 00:16:38.820501 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Sep 12 00:16:38.820610 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x8ffffffff window]
Sep 12 00:16:38.820719 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Sep 12 00:16:38.820882 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000 conventional PCI endpoint
Sep 12 00:16:38.821033 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000 conventional PCI endpoint
Sep 12 00:16:38.821156 kernel: pci 0000:00:01.0: BAR 0 [mem 0xfd000000-0xfdffffff pref]
Sep 12 00:16:38.821276 kernel: pci 0000:00:01.0: BAR 2 [mem 0xfebd0000-0xfebd0fff]
Sep 12 00:16:38.821394 kernel: pci 0000:00:01.0: ROM [mem 0xfebc0000-0xfebcffff pref]
Sep 12 00:16:38.821522 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Sep 12 00:16:38.821662 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00 conventional PCI endpoint
Sep 12 00:16:38.821795 kernel: pci 0000:00:02.0: BAR 0 [io 0xc0c0-0xc0df]
Sep 12 00:16:38.821953 kernel: pci 0000:00:02.0: BAR 1 [mem 0xfebd1000-0xfebd1fff]
Sep 12 00:16:38.822077 kernel: pci 0000:00:02.0: BAR 4 [mem 0xfe000000-0xfe003fff 64bit pref]
Sep 12 00:16:38.822214 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000 conventional PCI endpoint
Sep 12 00:16:38.822336 kernel: pci 0000:00:03.0: BAR 0 [io 0xc000-0xc07f]
Sep 12 00:16:38.822457 kernel: pci 0000:00:03.0: BAR 1 [mem 0xfebd2000-0xfebd2fff]
Sep 12 00:16:38.822578 kernel: pci 0000:00:03.0: BAR 4 [mem 0xfe004000-0xfe007fff 64bit pref]
Sep 12 00:16:38.822722 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint
Sep 12 00:16:38.822844 kernel: pci 0000:00:04.0: BAR 0 [io 0xc0e0-0xc0ff]
Sep 12 00:16:38.822993 kernel: pci 0000:00:04.0: BAR 1 [mem 0xfebd3000-0xfebd3fff]
Sep 12 00:16:38.823114 kernel: pci 0000:00:04.0: BAR 4 [mem 0xfe008000-0xfe00bfff 64bit pref]
Sep 12 00:16:38.823247 kernel: pci 0000:00:04.0: ROM [mem 0xfeb80000-0xfebbffff pref]
Sep 12 00:16:38.823389 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100 conventional PCI endpoint
Sep 12 00:16:38.823516 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Sep 12 00:16:38.823660 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601 conventional PCI endpoint
Sep 12 00:16:38.823781 kernel: pci 0000:00:1f.2: BAR 4 [io 0xc100-0xc11f]
Sep 12 00:16:38.823923 kernel: pci 0000:00:1f.2: BAR 5 [mem 0xfebd4000-0xfebd4fff]
Sep 12 00:16:38.824062 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500 conventional PCI endpoint
Sep 12 00:16:38.824184 kernel: pci 0000:00:1f.3: BAR 4 [io 0x0700-0x073f]
Sep 12 00:16:38.824195 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Sep 12 00:16:38.824206 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Sep 12 00:16:38.824214 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Sep 12 00:16:38.824222 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Sep 12 00:16:38.824230 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Sep 12 00:16:38.824238 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Sep 12 00:16:38.824246 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Sep 12 00:16:38.824254 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Sep 12 00:16:38.824261 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Sep 12 00:16:38.824269 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Sep 12 00:16:38.824280 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Sep 12 00:16:38.824288 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Sep 12 00:16:38.824295 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Sep 12 00:16:38.824303 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Sep 12 00:16:38.824311 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Sep 12 00:16:38.824319 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Sep 12 00:16:38.824327 kernel: iommu: Default domain type: Translated
Sep 12 00:16:38.824335 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Sep 12 00:16:38.824343 kernel: PCI: Using ACPI for IRQ routing
Sep 12 00:16:38.824353 kernel: PCI: pci_cache_line_size set to 64 bytes
Sep 12 00:16:38.824361 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Sep 12 00:16:38.824369 kernel: e820: reserve RAM buffer [mem 0x9cfdc000-0x9fffffff]
Sep 12 00:16:38.824489 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Sep 12 00:16:38.824609 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Sep 12 00:16:38.824738 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Sep 12 00:16:38.824751 kernel: vgaarb: loaded
Sep 12 00:16:38.824759 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Sep 12 00:16:38.824771 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Sep 12 00:16:38.824779 kernel: clocksource: Switched to clocksource kvm-clock
Sep 12 00:16:38.824787 kernel: VFS: Disk quotas dquot_6.6.0
Sep 12 00:16:38.824795 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Sep 12 00:16:38.824803 kernel: pnp: PnP ACPI init
Sep 12 00:16:38.824976 kernel: system 00:05: [mem 0xb0000000-0xbfffffff window] has been reserved
Sep 12 00:16:38.824988 kernel: pnp: PnP ACPI: found 6 devices
Sep 12 00:16:38.824997 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Sep 12 00:16:38.825008 kernel: NET: Registered PF_INET protocol family
Sep 12 00:16:38.825016 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Sep 12 00:16:38.825024 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Sep 12 00:16:38.825032 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Sep 12 00:16:38.825040 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Sep 12 00:16:38.825048 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Sep 12 00:16:38.825057 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Sep 12 00:16:38.825065 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Sep 12 00:16:38.825073 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Sep 12 00:16:38.825083 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Sep 12 00:16:38.825091 kernel: NET: Registered PF_XDP protocol family
Sep 12 00:16:38.825204 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Sep 12 00:16:38.825327 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Sep 12 00:16:38.825438 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Sep 12 00:16:38.825582 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xafffffff window]
Sep 12 00:16:38.825725 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window]
Sep 12 00:16:38.825837 kernel: pci_bus 0000:00: resource 9 [mem 0x100000000-0x8ffffffff window]
Sep 12 00:16:38.825851 kernel: PCI: CLS 0 bytes, default 64
Sep 12 00:16:38.825860 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x2848df6a9de, max_idle_ns: 440795280912 ns
Sep 12 00:16:38.825884 kernel: Initialise system trusted keyrings
Sep 12 00:16:38.825901 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Sep 12 00:16:38.825909 kernel: Key type asymmetric registered
Sep 12 00:16:38.825917 kernel: Asymmetric key parser 'x509' registered
Sep 12 00:16:38.825925 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Sep 12 00:16:38.825933 kernel: io scheduler mq-deadline registered
Sep 12 00:16:38.825941 kernel: io scheduler kyber registered
Sep 12 00:16:38.825952 kernel: io scheduler bfq registered
Sep 12 00:16:38.825960 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Sep 12 00:16:38.825968 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
Sep 12 00:16:38.825977 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23
Sep 12 00:16:38.825985 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20
Sep 12 00:16:38.825992 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Sep 12 00:16:38.826001 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Sep 12 00:16:38.826009 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Sep 12 00:16:38.826017 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Sep 12 00:16:38.826025 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Sep 12 00:16:38.826035 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Sep 12 00:16:38.826174 kernel: rtc_cmos 00:04: RTC can wake from S4
Sep 12 00:16:38.826291 kernel: rtc_cmos 00:04: registered as rtc0
Sep 12 00:16:38.826409 kernel: rtc_cmos 00:04: setting system clock to 2025-09-12T00:16:38 UTC (1757636198)
Sep 12 00:16:38.826521 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram, hpet irqs
Sep 12 00:16:38.826532 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
Sep 12 00:16:38.826541 kernel: NET: Registered PF_INET6 protocol family
Sep 12 00:16:38.826552 kernel: Segment Routing with IPv6
Sep 12 00:16:38.826560 kernel: In-situ OAM (IOAM) with IPv6
Sep 12 00:16:38.826568 kernel: NET: Registered PF_PACKET protocol family
Sep 12 00:16:38.826576 kernel: Key type dns_resolver registered
Sep 12 00:16:38.826584 kernel: IPI shorthand broadcast: enabled
Sep 12 00:16:38.826592 kernel: sched_clock: Marking stable (3243002957, 110277712)->(3371913702, -18633033)
Sep 12 00:16:38.826600 kernel: registered taskstats version 1
Sep 12 00:16:38.826609 kernel: Loading compiled-in X.509 certificates
Sep 12 00:16:38.826617 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.46-flatcar: 652e453facea91af3a07ba1d2bcc346a615f1cf9'
Sep 12 00:16:38.826627 kernel: Demotion targets for Node 0: null
Sep 12 00:16:38.826635 kernel: Key type .fscrypt registered
Sep 12 00:16:38.826643 kernel: Key type fscrypt-provisioning registered
Sep 12 00:16:38.826651 kernel: ima: No TPM chip found, activating TPM-bypass!
Sep 12 00:16:38.826659 kernel: ima: Allocated hash algorithm: sha1
Sep 12 00:16:38.826667 kernel: ima: No architecture policies found
Sep 12 00:16:38.826675 kernel: clk: Disabling unused clocks
Sep 12 00:16:38.826683 kernel: Warning: unable to open an initial console.
Sep 12 00:16:38.826691 kernel: Freeing unused kernel image (initmem) memory: 53836K
Sep 12 00:16:38.826705 kernel: Write protecting the kernel read-only data: 24576k
Sep 12 00:16:38.826716 kernel: Freeing unused kernel image (rodata/data gap) memory: 280K
Sep 12 00:16:38.826727 kernel: Run /init as init process
Sep 12 00:16:38.826737 kernel: with arguments:
Sep 12 00:16:38.826748 kernel: /init
Sep 12 00:16:38.826756 kernel: with environment:
Sep 12 00:16:38.826763 kernel: HOME=/
Sep 12 00:16:38.826771 kernel: TERM=linux
Sep 12 00:16:38.826779 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Sep 12 00:16:38.826791 systemd[1]: Successfully made /usr/ read-only.
Sep 12 00:16:38.826819 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Sep 12 00:16:38.826835 systemd[1]: Detected virtualization kvm.
Sep 12 00:16:38.826851 systemd[1]: Detected architecture x86-64.
Sep 12 00:16:38.826867 systemd[1]: Running in initrd.
Sep 12 00:16:38.826911 systemd[1]: No hostname configured, using default hostname.
Sep 12 00:16:38.826921 systemd[1]: Hostname set to .
Sep 12 00:16:38.826929 systemd[1]: Initializing machine ID from VM UUID.
Sep 12 00:16:38.826938 systemd[1]: Queued start job for default target initrd.target.
Sep 12 00:16:38.826947 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
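The rtc_cmos line above prints both the wall-clock time the kernel set and the matching Unix time. A quick cross-check of that equivalence:

```python
from datetime import datetime, timezone

# The Unix time printed by "rtc_cmos 00:04: setting system clock ..." above.
rtc_epoch = 1757636198
print(datetime.fromtimestamp(rtc_epoch, tz=timezone.utc).isoformat())
# 2025-09-12T00:16:38+00:00, matching the RFC 3339 time on the same line.
# The audit timestamp earlier (1757636195.850) is about two seconds before
# this, consistent with its position earlier in the same boot.
```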
Sep 12 00:16:38.826957 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Sep 12 00:16:38.826967 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Sep 12 00:16:38.826976 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Sep 12 00:16:38.826987 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Sep 12 00:16:38.826997 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Sep 12 00:16:38.827007 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Sep 12 00:16:38.827016 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Sep 12 00:16:38.827025 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Sep 12 00:16:38.827033 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Sep 12 00:16:38.827042 systemd[1]: Reached target paths.target - Path Units.
Sep 12 00:16:38.827053 systemd[1]: Reached target slices.target - Slice Units.
Sep 12 00:16:38.827062 systemd[1]: Reached target swap.target - Swaps.
Sep 12 00:16:38.827072 systemd[1]: Reached target timers.target - Timer Units.
Sep 12 00:16:38.827081 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Sep 12 00:16:38.827090 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Sep 12 00:16:38.827099 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Sep 12 00:16:38.827108 systemd[1]: Listening on systemd-journald.socket - Journal Sockets.
Sep 12 00:16:38.827117 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Sep 12 00:16:38.827125 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Sep 12 00:16:38.827136 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Sep 12 00:16:38.827145 systemd[1]: Reached target sockets.target - Socket Units.
Sep 12 00:16:38.827154 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Sep 12 00:16:38.827163 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Sep 12 00:16:38.827174 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Sep 12 00:16:38.827186 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply).
Sep 12 00:16:38.827195 systemd[1]: Starting systemd-fsck-usr.service...
Sep 12 00:16:38.827203 systemd[1]: Starting systemd-journald.service - Journal Service...
Sep 12 00:16:38.827222 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Sep 12 00:16:38.827233 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Sep 12 00:16:38.827251 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Sep 12 00:16:38.827263 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Sep 12 00:16:38.827272 systemd[1]: Finished systemd-fsck-usr.service.
Sep 12 00:16:38.827301 systemd-journald[221]: Collecting audit messages is disabled.
Sep 12 00:16:38.827325 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
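The device unit names above, such as dev-disk-by\x2dlabel-ROOT.device, come from systemd's path escaping: "/" becomes "-" and disallowed characters become \xXX of their byte value. A simplified sketch of that rule (systemd.unit(5) has the full specification):

```python
def systemd_escape_path(path: str) -> str:
    """Simplified version of systemd's path escaping for unit names."""
    trimmed = path.strip("/")
    out = []
    for i, ch in enumerate(trimmed):
        if ch == "/":
            out.append("-")
        elif ch.isalnum() or ch in ":_" or (ch == "." and i > 0):
            out.append(ch)
        else:
            out.append(f"\\x{ord(ch):02x}")  # e.g. "-" -> \x2d
    return "".join(out)

print(systemd_escape_path("/dev/disk/by-label/ROOT") + ".device")
# dev-disk-by\x2dlabel-ROOT.device, as in the "Expecting device" lines above
```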
Sep 12 00:16:38.827334 systemd-journald[221]: Journal started
Sep 12 00:16:38.827353 systemd-journald[221]: Runtime Journal (/run/log/journal/633dc625f2ff495c8ed0c49a510625a2) is 6M, max 48.6M, 42.5M free.
Sep 12 00:16:38.827317 systemd-modules-load[222]: Inserted module 'overlay'
Sep 12 00:16:38.830708 systemd[1]: Started systemd-journald.service - Journal Service.
Sep 12 00:16:38.834522 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Sep 12 00:16:38.873900 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Sep 12 00:16:38.876238 systemd-modules-load[222]: Inserted module 'br_netfilter'
Sep 12 00:16:38.877270 kernel: Bridge firewalling registered
Sep 12 00:16:38.879049 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Sep 12 00:16:38.881662 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Sep 12 00:16:38.883273 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Sep 12 00:16:38.887850 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Sep 12 00:16:38.888614 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Sep 12 00:16:38.888657 systemd-tmpfiles[237]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring.
Sep 12 00:16:38.893990 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Sep 12 00:16:38.896096 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Sep 12 00:16:38.914012 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Sep 12 00:16:38.914302 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Sep 12 00:16:38.921420 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Sep 12 00:16:38.928724 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Sep 12 00:16:38.938751 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Sep 12 00:16:38.962152 dracut-cmdline[265]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=04dc7b04a37dcb996cc4d6074b142179401d5685abf61ddcbaff7d77d0988990
Sep 12 00:16:38.988460 systemd-resolved[259]: Positive Trust Anchors:
Sep 12 00:16:38.988492 systemd-resolved[259]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Sep 12 00:16:38.988532 systemd-resolved[259]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Sep 12 00:16:38.992491 systemd-resolved[259]: Defaulting to hostname 'linux'.
Sep 12 00:16:38.994204 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Sep 12 00:16:38.999704 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Sep 12 00:16:39.096931 kernel: SCSI subsystem initialized
Sep 12 00:16:39.105919 kernel: Loading iSCSI transport class v2.0-870.
Sep 12 00:16:39.116919 kernel: iscsi: registered transport (tcp)
Sep 12 00:16:39.138942 kernel: iscsi: registered transport (qla4xxx)
Sep 12 00:16:39.139034 kernel: QLogic iSCSI HBA Driver
Sep 12 00:16:39.163051 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Sep 12 00:16:39.181493 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Sep 12 00:16:39.181940 systemd[1]: Reached target network-pre.target - Preparation for Network.
Sep 12 00:16:39.250134 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Sep 12 00:16:39.251815 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Sep 12 00:16:39.312947 kernel: raid6: avx2x4 gen() 29493 MB/s
Sep 12 00:16:39.329937 kernel: raid6: avx2x2 gen() 30303 MB/s
Sep 12 00:16:39.347024 kernel: raid6: avx2x1 gen() 25085 MB/s
Sep 12 00:16:39.347120 kernel: raid6: using algorithm avx2x2 gen() 30303 MB/s
Sep 12 00:16:39.365055 kernel: raid6: .... xor() 18848 MB/s, rmw enabled
Sep 12 00:16:39.365164 kernel: raid6: using avx2x2 recovery algorithm
Sep 12 00:16:39.386925 kernel: xor: automatically using best checksumming function avx
Sep 12 00:16:39.606932 kernel: Btrfs loaded, zoned=no, fsverity=no
Sep 12 00:16:39.615283 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Sep 12 00:16:39.618064 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Sep 12 00:16:39.651202 systemd-udevd[475]: Using default interface naming scheme 'v255'.
Sep 12 00:16:39.657202 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Sep 12 00:16:39.660723 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Sep 12 00:16:39.693724 dracut-pre-trigger[484]: rd.md=0: removing MD RAID activation
Sep 12 00:16:39.722638 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Sep 12 00:16:39.726052 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Sep 12 00:16:39.837432 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Sep 12 00:16:39.841223 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Sep 12 00:16:39.880506 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues
Sep 12 00:16:39.880713 kernel: cryptd: max_cpu_qlen set to 1000
Sep 12 00:16:39.909339 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB)
Sep 12 00:16:39.919715 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Sep 12 00:16:39.919742 kernel: GPT:9289727 != 19775487
Sep 12 00:16:39.919753 kernel: GPT:Alternate GPT header not at the end of the disk.
Sep 12 00:16:39.919763 kernel: GPT:9289727 != 19775487
Sep 12 00:16:39.919773 kernel: GPT: Use GNU Parted to correct GPT errors.
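The raid6 lines above record a boot-time benchmark: each gen() implementation is timed and the fastest one kept. A trivial sketch of that selection over the logged numbers:

```python
# Throughput of each raid6 gen() implementation, in MB/s, from the log above.
benchmarks = {
    "avx2x4": 29493,
    "avx2x2": 30303,
    "avx2x1": 25085,
}

best = max(benchmarks, key=benchmarks.get)
print(f"raid6: using algorithm {best} gen() {benchmarks[best]} MB/s")
# raid6: using algorithm avx2x2 gen() 30303 MB/s
```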
Sep 12 00:16:39.919783 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Sep 12 00:16:39.923804 kernel: AES CTR mode by8 optimization enabled
Sep 12 00:16:39.923902 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input2
Sep 12 00:16:39.937519 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Sep 12 00:16:39.939604 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Sep 12 00:16:39.941296 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Sep 12 00:16:39.946819 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Sep 12 00:16:39.949495 kernel: libata version 3.00 loaded.
Sep 12 00:16:39.951290 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
Sep 12 00:16:39.959912 kernel: ahci 0000:00:1f.2: version 3.0
Sep 12 00:16:39.964726 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16
Sep 12 00:16:39.964757 kernel: ahci 0000:00:1f.2: AHCI vers 0001.0000, 32 command slots, 1.5 Gbps, SATA mode
Sep 12 00:16:39.964961 kernel: ahci 0000:00:1f.2: 6/6 ports implemented (port mask 0x3f)
Sep 12 00:16:39.965108 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only
Sep 12 00:16:39.971892 kernel: scsi host0: ahci
Sep 12 00:16:39.972890 kernel: scsi host1: ahci
Sep 12 00:16:39.973065 kernel: scsi host2: ahci
Sep 12 00:16:39.973956 kernel: scsi host3: ahci
Sep 12 00:16:39.974132 kernel: scsi host4: ahci
Sep 12 00:16:39.981907 kernel: scsi host5: ahci
Sep 12 00:16:39.982105 kernel: ata1: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4100 irq 34 lpm-pol 1
Sep 12 00:16:39.982117 kernel: ata2: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4180 irq 34 lpm-pol 1
Sep 12 00:16:39.982128 kernel: ata3: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4200 irq 34 lpm-pol 1
Sep 12 00:16:39.982142 kernel: ata4: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4280 irq 34 lpm-pol 1
Sep 12 00:16:39.982152 kernel: ata5: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4300 irq 34 lpm-pol 1
Sep 12 00:16:39.982162 kernel: ata6: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4380 irq 34 lpm-pol 1
Sep 12 00:16:39.983186 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Sep 12 00:16:40.036522 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Sep 12 00:16:40.048399 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Sep 12 00:16:40.070621 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
Sep 12 00:16:40.071947 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Sep 12 00:16:40.084927 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Sep 12 00:16:40.088270 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
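The GPT warning in the preceding block (9289727 != 19775487) is the usual sign of a disk image that was grown after creation: the backup GPT header still sits at the end of the original image instead of the end of the disk. A sketch of the arithmetic:

```python
SECTOR = 512
blocks = 19775488          # from the virtio_blk line: 512-byte logical blocks
alt_header_lba = 9289727   # where the backup GPT header actually sits

print(f"disk size:  {blocks * SECTOR / 1e9:.1f} GB")                # 10.1 GB
print(f"image size: {(alt_header_lba + 1) * SECTOR / 1e9:.1f} GB")  # ~4.8 GB
# The backup header marks the end of a ~4.8 GB original image on a 10.1 GB
# disk; disk-uuid.service, started above, appears to be what rewrites the
# headers ("Primary Header is updated" in the entries that follow).
```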
Sep 12 00:16:40.291920 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300)
Sep 12 00:16:40.292020 kernel: ata2: SATA link down (SStatus 0 SControl 300)
Sep 12 00:16:40.292906 kernel: ata4: SATA link down (SStatus 0 SControl 300)
Sep 12 00:16:40.293923 kernel: ata3.00: LPM support broken, forcing max_power
Sep 12 00:16:40.294013 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100
Sep 12 00:16:40.294326 kernel: ata3.00: applying bridge limits
Sep 12 00:16:40.295905 kernel: ata6: SATA link down (SStatus 0 SControl 300)
Sep 12 00:16:40.295938 kernel: ata1: SATA link down (SStatus 0 SControl 300)
Sep 12 00:16:40.296911 kernel: ata3.00: LPM support broken, forcing max_power
Sep 12 00:16:40.297953 kernel: ata3.00: configured for UDMA/100
Sep 12 00:16:40.298901 kernel: ata5: SATA link down (SStatus 0 SControl 300)
Sep 12 00:16:40.300902 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5
Sep 12 00:16:40.349184 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray
Sep 12 00:16:40.349453 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Sep 12 00:16:40.363987 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0
Sep 12 00:16:40.661535 disk-uuid[636]: Primary Header is updated.
Sep 12 00:16:40.661535 disk-uuid[636]: Secondary Entries is updated.
Sep 12 00:16:40.661535 disk-uuid[636]: Secondary Header is updated.
Sep 12 00:16:40.665149 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Sep 12 00:16:40.704908 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Sep 12 00:16:40.805840 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Sep 12 00:16:40.836655 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Sep 12 00:16:40.839169 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Sep 12 00:16:40.841427 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Sep 12 00:16:40.844226 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Sep 12 00:16:40.875169 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Sep 12 00:16:41.671903 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Sep 12 00:16:41.672479 disk-uuid[641]: The operation has completed successfully.
Sep 12 00:16:41.707655 systemd[1]: disk-uuid.service: Deactivated successfully.
Sep 12 00:16:41.707774 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Sep 12 00:16:41.740825 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Sep 12 00:16:41.759191 sh[665]: Success
Sep 12 00:16:41.778583 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Sep 12 00:16:41.778650 kernel: device-mapper: uevent: version 1.0.3
Sep 12 00:16:41.778664 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev
Sep 12 00:16:41.787941 kernel: device-mapper: verity: sha256 using shash "sha256-ni"
Sep 12 00:16:41.822651 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Sep 12 00:16:41.826788 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Sep 12 00:16:41.851190 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Sep 12 00:16:41.859816 kernel: BTRFS: device fsid e375903e-484e-4702-81f7-5fa3109f1a1c devid 1 transid 36 /dev/mapper/usr (253:0) scanned by mount (677)
Sep 12 00:16:41.859862 kernel: BTRFS info (device dm-0): first mount of filesystem e375903e-484e-4702-81f7-5fa3109f1a1c
Sep 12 00:16:41.859897 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Sep 12 00:16:41.865904 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Sep 12 00:16:41.865939 kernel: BTRFS info (device dm-0): enabling free space tree
Sep 12 00:16:41.866965 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Sep 12 00:16:41.869124 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System.
Sep 12 00:16:41.871358 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Sep 12 00:16:41.873941 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Sep 12 00:16:41.876501 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Sep 12 00:16:41.896903 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (710)
Sep 12 00:16:41.899034 kernel: BTRFS info (device vda6): first mount of filesystem 8d27ac3b-8168-4900-a00e-1d8b6b830700
Sep 12 00:16:41.899060 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Sep 12 00:16:41.902002 kernel: BTRFS info (device vda6): turning on async discard
Sep 12 00:16:41.902027 kernel: BTRFS info (device vda6): enabling free space tree
Sep 12 00:16:41.906930 kernel: BTRFS info (device vda6): last unmount of filesystem 8d27ac3b-8168-4900-a00e-1d8b6b830700
Sep 12 00:16:41.907610 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Sep 12 00:16:41.910522 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Sep 12 00:16:42.140108 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Sep 12 00:16:42.143638 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Sep 12 00:16:42.201800 ignition[753]: Ignition 2.21.0
Sep 12 00:16:42.201813 ignition[753]: Stage: fetch-offline
Sep 12 00:16:42.201862 ignition[753]: no configs at "/usr/lib/ignition/base.d"
Sep 12 00:16:42.201887 ignition[753]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Sep 12 00:16:42.202029 ignition[753]: parsed url from cmdline: ""
Sep 12 00:16:42.202033 ignition[753]: no config URL provided
Sep 12 00:16:42.202038 ignition[753]: reading system config file "/usr/lib/ignition/user.ign"
Sep 12 00:16:42.202046 ignition[753]: no config at "/usr/lib/ignition/user.ign"
Sep 12 00:16:42.202075 ignition[753]: op(1): [started] loading QEMU firmware config module
Sep 12 00:16:42.202080 ignition[753]: op(1): executing: "modprobe" "qemu_fw_cfg"
Sep 12 00:16:42.214252 ignition[753]: op(1): [finished] loading QEMU firmware config module
Sep 12 00:16:42.214281 ignition[753]: QEMU firmware config was not found. Ignoring...
Sep 12 00:16:42.223184 systemd-networkd[852]: lo: Link UP
Sep 12 00:16:42.223194 systemd-networkd[852]: lo: Gained carrier
Sep 12 00:16:42.224795 systemd-networkd[852]: Enumeration completed
Sep 12 00:16:42.224958 systemd[1]: Started systemd-networkd.service - Network Configuration.
Sep 12 00:16:42.225624 systemd-networkd[852]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Sep 12 00:16:42.225628 systemd-networkd[852]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Sep 12 00:16:42.226072 systemd-networkd[852]: eth0: Link UP
Sep 12 00:16:42.227028 systemd-networkd[852]: eth0: Gained carrier
Sep 12 00:16:42.227037 systemd-networkd[852]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Sep 12 00:16:42.227397 systemd[1]: Reached target network.target - Network.
Sep 12 00:16:42.244921 systemd-networkd[852]: eth0: DHCPv4 address 10.0.0.92/16, gateway 10.0.0.1 acquired from 10.0.0.1
Sep 12 00:16:42.267358 ignition[753]: parsing config with SHA512: 125420b51aea1e9492948e06ab2406979e901baafee588118840910597f3f77f36a4b9be5f706a53f3df858570d3d1294ddc46b7bb4b365d0ca898505d77b2f1
Sep 12 00:16:42.271518 unknown[753]: fetched base config from "system"
Sep 12 00:16:42.271530 unknown[753]: fetched user config from "qemu"
Sep 12 00:16:42.272049 ignition[753]: fetch-offline: fetch-offline passed
Sep 12 00:16:42.272141 ignition[753]: Ignition finished successfully
Sep 12 00:16:42.274990 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Sep 12 00:16:42.276831 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
Sep 12 00:16:42.277675 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Sep 12 00:16:42.330954 ignition[861]: Ignition 2.21.0
Sep 12 00:16:42.330966 ignition[861]: Stage: kargs
Sep 12 00:16:42.331103 ignition[861]: no configs at "/usr/lib/ignition/base.d"
Sep 12 00:16:42.331114 ignition[861]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Sep 12 00:16:42.334440 ignition[861]: kargs: kargs passed
Sep 12 00:16:42.334521 ignition[861]: Ignition finished successfully
Sep 12 00:16:42.339262 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Sep 12 00:16:42.342112 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Sep 12 00:16:42.375667 ignition[869]: Ignition 2.21.0
Sep 12 00:16:42.375680 ignition[869]: Stage: disks
Sep 12 00:16:42.375806 ignition[869]: no configs at "/usr/lib/ignition/base.d"
Sep 12 00:16:42.375817 ignition[869]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Sep 12 00:16:42.379730 ignition[869]: disks: disks passed
Sep 12 00:16:42.380221 ignition[869]: Ignition finished successfully
Sep 12 00:16:42.383719 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Sep 12 00:16:42.384977 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Sep 12 00:16:42.385729 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Sep 12 00:16:42.388358 systemd[1]: Reached target local-fs.target - Local File Systems.
Sep 12 00:16:42.390685 systemd[1]: Reached target sysinit.target - System Initialization.
Sep 12 00:16:42.393004 systemd[1]: Reached target basic.target - Basic System.
Sep 12 00:16:42.396614 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Sep 12 00:16:42.429540 systemd-fsck[879]: ROOT: clean, 15/553520 files, 52789/553472 blocks
Sep 12 00:16:42.437496 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Sep 12 00:16:42.440677 systemd[1]: Mounting sysroot.mount - /sysroot...
Sep 12 00:16:42.572894 kernel: EXT4-fs (vda9): mounted filesystem c7fbf20f-7fc7-47c1-8781-0f8569841f1e r/w with ordered data mode. Quota mode: none.
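Ignition logs a SHA512 fingerprint of the merged config it is about to apply ("parsing config with SHA512: ..." above). A sketch of producing such a fingerprint over a config blob; the config literal here is a hypothetical stand-in, not the actual base+user config merged on this boot:

```python
import hashlib
import json

# Hypothetical stand-in config, just to show the hashing step; the real
# fingerprint above was computed over the actual merged config bytes.
config = json.dumps({"ignition": {"version": "3.4.0"}}).encode()
print(hashlib.sha512(config).hexdigest())
```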
Sep 12 00:16:42.573327 systemd[1]: Mounted sysroot.mount - /sysroot.
Sep 12 00:16:42.573964 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Sep 12 00:16:42.577332 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Sep 12 00:16:42.579184 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Sep 12 00:16:42.580239 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Sep 12 00:16:42.580278 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Sep 12 00:16:42.580302 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Sep 12 00:16:42.595233 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Sep 12 00:16:42.598173 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Sep 12 00:16:42.603442 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (887)
Sep 12 00:16:42.603467 kernel: BTRFS info (device vda6): first mount of filesystem 8d27ac3b-8168-4900-a00e-1d8b6b830700
Sep 12 00:16:42.603483 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Sep 12 00:16:42.605940 kernel: BTRFS info (device vda6): turning on async discard
Sep 12 00:16:42.605968 kernel: BTRFS info (device vda6): enabling free space tree
Sep 12 00:16:42.607987 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Sep 12 00:16:42.638174 initrd-setup-root[911]: cut: /sysroot/etc/passwd: No such file or directory
Sep 12 00:16:42.642303 initrd-setup-root[918]: cut: /sysroot/etc/group: No such file or directory
Sep 12 00:16:42.646701 initrd-setup-root[925]: cut: /sysroot/etc/shadow: No such file or directory
Sep 12 00:16:42.651426 initrd-setup-root[932]: cut: /sysroot/etc/gshadow: No such file or directory
Sep 12 00:16:42.736979 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Sep 12 00:16:42.738017 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Sep 12 00:16:42.738744 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Sep 12 00:16:42.755936 kernel: BTRFS info (device vda6): last unmount of filesystem 8d27ac3b-8168-4900-a00e-1d8b6b830700
Sep 12 00:16:42.768082 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Sep 12 00:16:42.783442 ignition[1001]: INFO : Ignition 2.21.0
Sep 12 00:16:42.783442 ignition[1001]: INFO : Stage: mount
Sep 12 00:16:42.785192 ignition[1001]: INFO : no configs at "/usr/lib/ignition/base.d"
Sep 12 00:16:42.785192 ignition[1001]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Sep 12 00:16:42.789099 ignition[1001]: INFO : mount: mount passed
Sep 12 00:16:42.789860 ignition[1001]: INFO : Ignition finished successfully
Sep 12 00:16:42.793603 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Sep 12 00:16:42.796842 systemd[1]: Starting ignition-files.service - Ignition (files)...
Sep 12 00:16:42.859106 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Sep 12 00:16:42.862160 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Sep 12 00:16:43.006908 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (1013)
Sep 12 00:16:43.006956 kernel: BTRFS info (device vda6): first mount of filesystem 8d27ac3b-8168-4900-a00e-1d8b6b830700
Sep 12 00:16:43.008899 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Sep 12 00:16:43.011892 kernel: BTRFS info (device vda6): turning on async discard
Sep 12 00:16:43.011909 kernel: BTRFS info (device vda6): enabling free space tree
Sep 12 00:16:43.013642 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Sep 12 00:16:43.056990 ignition[1030]: INFO : Ignition 2.21.0
Sep 12 00:16:43.056990 ignition[1030]: INFO : Stage: files
Sep 12 00:16:43.059009 ignition[1030]: INFO : no configs at "/usr/lib/ignition/base.d"
Sep 12 00:16:43.059009 ignition[1030]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Sep 12 00:16:43.063282 ignition[1030]: DEBUG : files: compiled without relabeling support, skipping
Sep 12 00:16:43.065468 ignition[1030]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Sep 12 00:16:43.065468 ignition[1030]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Sep 12 00:16:43.070436 ignition[1030]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Sep 12 00:16:43.072043 ignition[1030]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Sep 12 00:16:43.073748 unknown[1030]: wrote ssh authorized keys file for user: core
Sep 12 00:16:43.074860 ignition[1030]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Sep 12 00:16:43.077689 ignition[1030]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz"
Sep 12 00:16:43.079691 ignition[1030]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.0-linux-amd64.tar.gz: attempt #1
Sep 12 00:16:43.331312 ignition[1030]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Sep 12 00:16:43.525107 systemd-networkd[852]: eth0: Gained IPv6LL
Sep 12 00:16:43.920329 ignition[1030]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz"
Sep 12 00:16:43.922564 ignition[1030]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
Sep 12 00:16:43.922564 ignition[1030]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Sep 12 00:16:43.922564 ignition[1030]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
Sep 12 00:16:43.922564 ignition[1030]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
Sep 12 00:16:43.922564 ignition[1030]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Sep 12 00:16:43.922564 ignition[1030]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Sep 12 00:16:43.922564 ignition[1030]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Sep 12 00:16:43.922564 ignition[1030]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Sep 12 00:16:43.936759 ignition[1030]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
Sep 12 00:16:43.936759 ignition[1030]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Sep 12 00:16:43.936759 ignition[1030]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw"
Sep 12 00:16:43.936759 ignition[1030]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw"
Sep 12 00:16:43.936759 ignition[1030]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw"
Sep 12 00:16:43.936759 ignition[1030]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.32.4-x86-64.raw: attempt #1
Sep 12 00:16:44.407377 ignition[1030]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Sep 12 00:16:45.184520 ignition[1030]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw"
Sep 12 00:16:45.184520 ignition[1030]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
Sep 12 00:16:45.188156 ignition[1030]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Sep 12 00:16:45.195023 ignition[1030]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Sep 12 00:16:45.195023 ignition[1030]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
Sep 12 00:16:45.195023 ignition[1030]: INFO : files: op(d): [started] processing unit "coreos-metadata.service"
Sep 12 00:16:45.199795 ignition[1030]: INFO : files: op(d): op(e): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Sep 12 00:16:45.199795 ignition[1030]: INFO : files: op(d): op(e): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Sep 12 00:16:45.199795 ignition[1030]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service"
Sep 12 00:16:45.199795 ignition[1030]: INFO : files: op(f): [started] setting preset to disabled for "coreos-metadata.service"
Sep 12 00:16:45.218228 ignition[1030]: INFO : files: op(f): op(10): [started] removing enablement symlink(s) for "coreos-metadata.service"
Sep 12 00:16:45.319925 ignition[1030]: INFO : files: op(f): op(10): [finished] removing enablement symlink(s) for "coreos-metadata.service"
Sep 12 00:16:45.321954 ignition[1030]: INFO : files: op(f): [finished] setting preset to disabled for "coreos-metadata.service"
Sep 12 00:16:45.321954 ignition[1030]: INFO : files: op(11): [started] setting preset to enabled for "prepare-helm.service"
Sep 12 00:16:45.324908 ignition[1030]: INFO : files: op(11): [finished] setting preset to enabled for "prepare-helm.service"
Sep 12 00:16:45.326548 ignition[1030]: INFO : files: createResultFile: createFiles: op(12): [started] writing file "/sysroot/etc/.ignition-result.json"
Sep 12 00:16:45.328380 ignition[1030]: INFO : files: createResultFile: createFiles: op(12): [finished] writing file "/sysroot/etc/.ignition-result.json"
Sep 12 00:16:45.330093 ignition[1030]: INFO : files: files passed
Sep 12 00:16:45.330841 ignition[1030]: INFO : Ignition finished successfully
Sep 12 00:16:45.334798 systemd[1]: Finished ignition-files.service - Ignition (files).
Sep 12 00:16:45.337063 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Sep 12 00:16:45.339295 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Sep 12 00:16:45.354082 systemd[1]: ignition-quench.service: Deactivated successfully.
Sep 12 00:16:45.354271 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Sep 12 00:16:45.358580 initrd-setup-root-after-ignition[1059]: grep: /sysroot/oem/oem-release: No such file or directory
Sep 12 00:16:45.361946 initrd-setup-root-after-ignition[1061]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Sep 12 00:16:45.364019 initrd-setup-root-after-ignition[1061]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Sep 12 00:16:45.365495 initrd-setup-root-after-ignition[1065]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Sep 12 00:16:45.368057 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Sep 12 00:16:45.368567 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Sep 12 00:16:45.371429 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Sep 12 00:16:45.428994 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Sep 12 00:16:45.429118 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Sep 12 00:16:45.432337 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Sep 12 00:16:45.434224 systemd[1]: Reached target initrd.target - Initrd Default Target.
Sep 12 00:16:45.434349 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Sep 12 00:16:45.436184 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Sep 12 00:16:45.466720 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Sep 12 00:16:45.469486 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Sep 12 00:16:45.493937 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Sep 12 00:16:45.494088 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Sep 12 00:16:45.497257 systemd[1]: Stopped target timers.target - Timer Units.
Sep 12 00:16:45.498369 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Sep 12 00:16:45.498480 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Sep 12 00:16:45.502961 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Sep 12 00:16:45.503091 systemd[1]: Stopped target basic.target - Basic System.
Sep 12 00:16:45.505073 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Sep 12 00:16:45.505520 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Sep 12 00:16:45.505849 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Sep 12 00:16:45.506342 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System.
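The files-stage operations logged above are driven entirely by the user config Ignition fetched from the qemu platform earlier in the boot; only its SHA512 appears in the log. A minimal sketch of the kind of Ignition config that would produce op(2), op(3), and op(11); the SSH key and unit contents are hypothetical placeholders, abbreviated with "...":

    {
      "ignition": { "version": "3.4.0" },
      "passwd": {
        "users": [
          { "name": "core", "sshAuthorizedKeys": ["ssh-ed25519 ..."] }
        ]
      },
      "storage": {
        "files": [
          {
            "path": "/opt/helm-v3.17.0-linux-amd64.tar.gz",
            "contents": { "source": "https://get.helm.sh/helm-v3.17.0-linux-amd64.tar.gz" }
          }
        ]
      },
      "systemd": {
        "units": [
          { "name": "prepare-helm.service", "enabled": true, "contents": "..." }
        ]
      }
    }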
Sep 12 00:16:45.506658 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Sep 12 00:16:45.506998 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Sep 12 00:16:45.507626 systemd[1]: Stopped target sysinit.target - System Initialization.
Sep 12 00:16:45.520139 systemd[1]: Stopped target local-fs.target - Local File Systems.
Sep 12 00:16:45.520317 systemd[1]: Stopped target swap.target - Swaps.
Sep 12 00:16:45.522097 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Sep 12 00:16:45.522256 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Sep 12 00:16:45.526407 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Sep 12 00:16:45.527585 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Sep 12 00:16:45.529680 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Sep 12 00:16:45.531921 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Sep 12 00:16:45.534380 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Sep 12 00:16:45.534561 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Sep 12 00:16:45.537565 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Sep 12 00:16:45.537781 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Sep 12 00:16:45.539070 systemd[1]: Stopped target paths.target - Path Units.
Sep 12 00:16:45.539436 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Sep 12 00:16:45.546009 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Sep 12 00:16:45.548765 systemd[1]: Stopped target slices.target - Slice Units.
Sep 12 00:16:45.549789 systemd[1]: Stopped target sockets.target - Socket Units.
Sep 12 00:16:45.550753 systemd[1]: iscsid.socket: Deactivated successfully.
Sep 12 00:16:45.550915 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Sep 12 00:16:45.552422 systemd[1]: iscsiuio.socket: Deactivated successfully.
Sep 12 00:16:45.552537 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Sep 12 00:16:45.554127 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Sep 12 00:16:45.554301 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Sep 12 00:16:45.555903 systemd[1]: ignition-files.service: Deactivated successfully.
Sep 12 00:16:45.556057 systemd[1]: Stopped ignition-files.service - Ignition (files).
Sep 12 00:16:45.559604 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Sep 12 00:16:45.560626 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Sep 12 00:16:45.560784 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Sep 12 00:16:45.565654 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Sep 12 00:16:45.566759 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Sep 12 00:16:45.566979 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Sep 12 00:16:45.569149 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Sep 12 00:16:45.569338 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Sep 12 00:16:45.579415 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Sep 12 00:16:45.579579 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Sep 12 00:16:45.631631 ignition[1086]: INFO : Ignition 2.21.0
Sep 12 00:16:45.631631 ignition[1086]: INFO : Stage: umount
Sep 12 00:16:45.633555 ignition[1086]: INFO : no configs at "/usr/lib/ignition/base.d"
Sep 12 00:16:45.633555 ignition[1086]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Sep 12 00:16:45.633555 ignition[1086]: INFO : umount: umount passed
Sep 12 00:16:45.633555 ignition[1086]: INFO : Ignition finished successfully
Sep 12 00:16:45.639894 systemd[1]: ignition-mount.service: Deactivated successfully.
Sep 12 00:16:45.640083 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Sep 12 00:16:45.641426 systemd[1]: Stopped target network.target - Network.
Sep 12 00:16:45.644329 systemd[1]: ignition-disks.service: Deactivated successfully.
Sep 12 00:16:45.644406 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Sep 12 00:16:45.647365 systemd[1]: ignition-kargs.service: Deactivated successfully.
Sep 12 00:16:45.647433 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Sep 12 00:16:45.650122 systemd[1]: ignition-setup.service: Deactivated successfully.
Sep 12 00:16:45.650204 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Sep 12 00:16:45.652212 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Sep 12 00:16:45.652272 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Sep 12 00:16:45.653522 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Sep 12 00:16:45.658110 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Sep 12 00:16:45.661764 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Sep 12 00:16:45.662538 systemd[1]: systemd-resolved.service: Deactivated successfully.
Sep 12 00:16:45.662685 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Sep 12 00:16:45.667360 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully.
Sep 12 00:16:45.667676 systemd[1]: systemd-networkd.service: Deactivated successfully.
Sep 12 00:16:45.667839 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Sep 12 00:16:45.672445 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully.
Sep 12 00:16:45.672769 systemd[1]: sysroot-boot.service: Deactivated successfully.
Sep 12 00:16:45.672924 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Sep 12 00:16:45.677059 systemd[1]: Stopped target network-pre.target - Preparation for Network.
Sep 12 00:16:45.678314 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Sep 12 00:16:45.678369 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Sep 12 00:16:45.680536 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Sep 12 00:16:45.680599 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Sep 12 00:16:45.683720 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Sep 12 00:16:45.685406 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Sep 12 00:16:45.685463 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Sep 12 00:16:45.687610 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Sep 12 00:16:45.687657 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Sep 12 00:16:45.690492 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Sep 12 00:16:45.690539 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Sep 12 00:16:45.691748 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Sep 12 00:16:45.691798 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Sep 12 00:16:45.695543 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Sep 12 00:16:45.698508 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Sep 12 00:16:45.698575 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully.
Sep 12 00:16:45.715306 systemd[1]: systemd-udevd.service: Deactivated successfully.
Sep 12 00:16:45.715505 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Sep 12 00:16:45.716813 systemd[1]: network-cleanup.service: Deactivated successfully.
Sep 12 00:16:45.716950 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Sep 12 00:16:45.719258 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Sep 12 00:16:45.719326 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Sep 12 00:16:45.720480 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Sep 12 00:16:45.720518 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Sep 12 00:16:45.722288 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Sep 12 00:16:45.722344 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Sep 12 00:16:45.723073 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Sep 12 00:16:45.723115 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Sep 12 00:16:45.723732 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Sep 12 00:16:45.723777 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Sep 12 00:16:45.725319 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Sep 12 00:16:45.732160 systemd[1]: systemd-network-generator.service: Deactivated successfully.
Sep 12 00:16:45.732212 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line.
Sep 12 00:16:45.736191 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Sep 12 00:16:45.736240 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Sep 12 00:16:45.740682 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Sep 12 00:16:45.740764 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Sep 12 00:16:45.746437 systemd[1]: run-credentials-systemd\x2dnetwork\x2dgenerator.service.mount: Deactivated successfully.
Sep 12 00:16:45.746498 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully.
Sep 12 00:16:45.746545 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
Sep 12 00:16:45.758135 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Sep 12 00:16:45.758255 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Sep 12 00:16:45.761551 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Sep 12 00:16:45.762386 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Sep 12 00:16:45.778710 systemd[1]: Switching root.
Sep 12 00:16:45.823564 systemd-journald[221]: Journal stopped
Sep 12 00:16:47.094586 systemd-journald[221]: Received SIGTERM from PID 1 (systemd).
Sep 12 00:16:47.094658 kernel: SELinux: policy capability network_peer_controls=1
Sep 12 00:16:47.094672 kernel: SELinux: policy capability open_perms=1
Sep 12 00:16:47.094684 kernel: SELinux: policy capability extended_socket_class=1
Sep 12 00:16:47.094695 kernel: SELinux: policy capability always_check_network=0
Sep 12 00:16:47.094711 kernel: SELinux: policy capability cgroup_seclabel=1
Sep 12 00:16:47.094722 kernel: SELinux: policy capability nnp_nosuid_transition=1
Sep 12 00:16:47.094738 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Sep 12 00:16:47.094749 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Sep 12 00:16:47.094768 kernel: SELinux: policy capability userspace_initial_context=0
Sep 12 00:16:47.094787 kernel: audit: type=1403 audit(1757636206.186:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Sep 12 00:16:47.094800 systemd[1]: Successfully loaded SELinux policy in 48.089ms.
Sep 12 00:16:47.094828 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 13.683ms.
Sep 12 00:16:47.094841 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Sep 12 00:16:47.094854 systemd[1]: Detected virtualization kvm.
Sep 12 00:16:47.094865 systemd[1]: Detected architecture x86-64.
Sep 12 00:16:47.094897 systemd[1]: Detected first boot.
Sep 12 00:16:47.094912 systemd[1]: Initializing machine ID from VM UUID.
Sep 12 00:16:47.094923 kernel: Guest personality initialized and is inactive
Sep 12 00:16:47.094936 zram_generator::config[1132]: No configuration found.
Sep 12 00:16:47.094949 kernel: VMCI host device registered (name=vmci, major=10, minor=125)
Sep 12 00:16:47.094963 kernel: Initialized host personality
Sep 12 00:16:47.094974 kernel: NET: Registered PF_VSOCK protocol family
Sep 12 00:16:47.094985 systemd[1]: Populated /etc with preset unit settings.
Sep 12 00:16:47.095002 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully.
Sep 12 00:16:47.095014 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Sep 12 00:16:47.095028 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Sep 12 00:16:47.095040 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Sep 12 00:16:47.095052 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Sep 12 00:16:47.095064 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Sep 12 00:16:47.095076 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Sep 12 00:16:47.095088 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Sep 12 00:16:47.095100 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Sep 12 00:16:47.095117 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Sep 12 00:16:47.095129 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Sep 12 00:16:47.095141 systemd[1]: Created slice user.slice - User and Session Slice.
Sep 12 00:16:47.095154 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Sep 12 00:16:47.095166 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Sep 12 00:16:47.095179 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Sep 12 00:16:47.095191 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Sep 12 00:16:47.095203 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Sep 12 00:16:47.095222 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Sep 12 00:16:47.095234 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Sep 12 00:16:47.095246 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Sep 12 00:16:47.095258 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Sep 12 00:16:47.095270 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Sep 12 00:16:47.095282 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Sep 12 00:16:47.095298 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Sep 12 00:16:47.095309 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Sep 12 00:16:47.095326 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Sep 12 00:16:47.095338 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Sep 12 00:16:47.095349 systemd[1]: Reached target slices.target - Slice Units.
Sep 12 00:16:47.095361 systemd[1]: Reached target swap.target - Swaps.
Sep 12 00:16:47.095373 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Sep 12 00:16:47.095388 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Sep 12 00:16:47.095400 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption.
Sep 12 00:16:47.095412 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Sep 12 00:16:47.095424 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Sep 12 00:16:47.095436 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Sep 12 00:16:47.095458 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Sep 12 00:16:47.095474 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Sep 12 00:16:47.095487 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Sep 12 00:16:47.095498 systemd[1]: Mounting media.mount - External Media Directory...
Sep 12 00:16:47.095510 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 12 00:16:47.095522 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Sep 12 00:16:47.095534 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Sep 12 00:16:47.095546 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Sep 12 00:16:47.095564 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Sep 12 00:16:47.095576 systemd[1]: Reached target machines.target - Containers.
Sep 12 00:16:47.095588 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Sep 12 00:16:47.095604 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Sep 12 00:16:47.095616 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Sep 12 00:16:47.095628 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Sep 12 00:16:47.095640 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Sep 12 00:16:47.095659 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Sep 12 00:16:47.095671 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Sep 12 00:16:47.095690 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Sep 12 00:16:47.095703 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Sep 12 00:16:47.095715 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Sep 12 00:16:47.095727 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Sep 12 00:16:47.095739 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Sep 12 00:16:47.095751 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Sep 12 00:16:47.095763 systemd[1]: Stopped systemd-fsck-usr.service.
Sep 12 00:16:47.095776 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Sep 12 00:16:47.095792 systemd[1]: Starting systemd-journald.service - Journal Service...
Sep 12 00:16:47.095804 kernel: fuse: init (API version 7.41)
Sep 12 00:16:47.095815 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Sep 12 00:16:47.095827 kernel: loop: module loaded
Sep 12 00:16:47.095839 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Sep 12 00:16:47.095851 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Sep 12 00:16:47.095863 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials...
Sep 12 00:16:47.095890 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Sep 12 00:16:47.095908 systemd[1]: verity-setup.service: Deactivated successfully.
Sep 12 00:16:47.095920 systemd[1]: Stopped verity-setup.service.
Sep 12 00:16:47.095944 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 12 00:16:47.095959 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Sep 12 00:16:47.095972 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Sep 12 00:16:47.095985 systemd[1]: Mounted media.mount - External Media Directory.
Sep 12 00:16:47.096003 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Sep 12 00:16:47.096015 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Sep 12 00:16:47.096026 kernel: ACPI: bus type drm_connector registered
Sep 12 00:16:47.096041 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Sep 12 00:16:47.096057 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Sep 12 00:16:47.096073 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Sep 12 00:16:47.096086 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Sep 12 00:16:47.096098 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Sep 12 00:16:47.096110 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Sep 12 00:16:47.096122 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Sep 12 00:16:47.096137 systemd[1]: modprobe@drm.service: Deactivated successfully.
Sep 12 00:16:47.096171 systemd-journald[1203]: Collecting audit messages is disabled.
Sep 12 00:16:47.096196 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Sep 12 00:16:47.096213 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Sep 12 00:16:47.096226 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Sep 12 00:16:47.096237 systemd-journald[1203]: Journal started
Sep 12 00:16:47.096260 systemd-journald[1203]: Runtime Journal (/run/log/journal/633dc625f2ff495c8ed0c49a510625a2) is 6M, max 48.6M, 42.5M free.
Sep 12 00:16:46.839226 systemd[1]: Queued start job for default target multi-user.target.
Sep 12 00:16:46.860036 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
Sep 12 00:16:46.860537 systemd[1]: systemd-journald.service: Deactivated successfully.
Sep 12 00:16:47.099895 systemd[1]: Started systemd-journald.service - Journal Service.
Sep 12 00:16:47.101796 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Sep 12 00:16:47.102315 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Sep 12 00:16:47.103760 systemd[1]: modprobe@loop.service: Deactivated successfully.
Sep 12 00:16:47.104026 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Sep 12 00:16:47.105456 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Sep 12 00:16:47.107038 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Sep 12 00:16:47.108708 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Sep 12 00:16:47.110436 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials.
Sep 12 00:16:47.128422 systemd[1]: Reached target network-pre.target - Preparation for Network.
Sep 12 00:16:47.131184 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Sep 12 00:16:47.133295 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Sep 12 00:16:47.134527 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Sep 12 00:16:47.134632 systemd[1]: Reached target local-fs.target - Local File Systems.
Sep 12 00:16:47.136679 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management.
Sep 12 00:16:47.146853 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Sep 12 00:16:47.148254 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Sep 12 00:16:47.150025 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Sep 12 00:16:47.153936 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Sep 12 00:16:47.155087 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Sep 12 00:16:47.156212 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Sep 12 00:16:47.157345 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Sep 12 00:16:47.159110 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Sep 12 00:16:47.163070 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Sep 12 00:16:47.165318 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Sep 12 00:16:47.168212 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Sep 12 00:16:47.170134 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Sep 12 00:16:47.172149 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Sep 12 00:16:47.179360 systemd-journald[1203]: Time spent on flushing to /var/log/journal/633dc625f2ff495c8ed0c49a510625a2 is 15.790ms for 987 entries.
Sep 12 00:16:47.179360 systemd-journald[1203]: System Journal (/var/log/journal/633dc625f2ff495c8ed0c49a510625a2) is 8M, max 195.6M, 187.6M free.
Sep 12 00:16:47.419833 systemd-journald[1203]: Received client request to flush runtime journal.
Sep 12 00:16:47.419969 kernel: loop0: detected capacity change from 0 to 224512
Sep 12 00:16:47.420144 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Sep 12 00:16:47.420377 kernel: loop1: detected capacity change from 0 to 113872
Sep 12 00:16:47.420588 kernel: loop2: detected capacity change from 0 to 146240
Sep 12 00:16:47.299473 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Sep 12 00:16:47.400507 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Sep 12 00:16:47.403050 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Sep 12 00:16:47.407272 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Sep 12 00:16:47.411800 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk...
Sep 12 00:16:47.414353 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Sep 12 00:16:47.501084 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Sep 12 00:16:47.507910 kernel: loop3: detected capacity change from 0 to 224512
Sep 12 00:16:47.526867 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk.
Sep 12 00:16:47.534906 kernel: loop4: detected capacity change from 0 to 113872
Sep 12 00:16:47.542367 systemd-tmpfiles[1268]: ACLs are not supported, ignoring.
Sep 12 00:16:47.542385 systemd-tmpfiles[1268]: ACLs are not supported, ignoring.
Sep 12 00:16:47.547895 kernel: loop5: detected capacity change from 0 to 146240
Sep 12 00:16:47.551821 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Sep 12 00:16:47.560887 (sd-merge)[1270]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'.
Sep 12 00:16:47.561481 (sd-merge)[1270]: Merged extensions into '/usr'.
Sep 12 00:16:47.603588 systemd[1]: Reload requested from client PID 1251 ('systemd-sysext') (unit systemd-sysext.service)...
Sep 12 00:16:47.603617 systemd[1]: Reloading...
Sep 12 00:16:47.685094 zram_generator::config[1298]: No configuration found.
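The (sd-merge) lines show systemd-sysext activating the three extension images and overlaying them onto /usr, after which PID 1 reloads to pick up the merged unit files; the kubernetes image is the one Ignition linked into /etc/extensions earlier. Once the system is up, the merge can be inspected and redone with the stock CLI (output omitted):

    systemd-sysext status     # list hierarchies (/usr, /opt) and which extensions are merged
    systemd-sysext refresh    # unmerge and re-merge after images under /etc/extensions change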
Sep 12 00:16:47.865669 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Sep 12 00:16:47.903926 ldconfig[1246]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Sep 12 00:16:47.972198 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Sep 12 00:16:47.973086 systemd[1]: Reloading finished in 368 ms.
Sep 12 00:16:48.012308 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Sep 12 00:16:48.014313 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Sep 12 00:16:48.033543 systemd[1]: Starting ensure-sysext.service...
Sep 12 00:16:48.035585 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Sep 12 00:16:48.056269 systemd[1]: Reload requested from client PID 1336 ('systemctl') (unit ensure-sysext.service)...
Sep 12 00:16:48.056287 systemd[1]: Reloading...
Sep 12 00:16:48.064739 systemd-tmpfiles[1337]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring.
Sep 12 00:16:48.064906 systemd-tmpfiles[1337]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring.
Sep 12 00:16:48.065962 systemd-tmpfiles[1337]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Sep 12 00:16:48.066231 systemd-tmpfiles[1337]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Sep 12 00:16:48.067159 systemd-tmpfiles[1337]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Sep 12 00:16:48.067423 systemd-tmpfiles[1337]: ACLs are not supported, ignoring.
Sep 12 00:16:48.067497 systemd-tmpfiles[1337]: ACLs are not supported, ignoring.
Sep 12 00:16:48.074818 systemd-tmpfiles[1337]: Detected autofs mount point /boot during canonicalization of boot.
Sep 12 00:16:48.074833 systemd-tmpfiles[1337]: Skipping /boot
Sep 12 00:16:48.097800 systemd-tmpfiles[1337]: Detected autofs mount point /boot during canonicalization of boot.
Sep 12 00:16:48.097819 systemd-tmpfiles[1337]: Skipping /boot
Sep 12 00:16:48.122989 zram_generator::config[1364]: No configuration found.
Sep 12 00:16:48.329549 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Sep 12 00:16:48.426051 systemd[1]: Reloading finished in 369 ms.
Sep 12 00:16:48.450784 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Sep 12 00:16:48.473785 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Sep 12 00:16:48.484318 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Sep 12 00:16:48.488707 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Sep 12 00:16:48.491227 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Sep 12 00:16:48.508600 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Sep 12 00:16:48.513978 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Sep 12 00:16:48.517555 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
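The "Duplicate line for path" warnings are systemd-tmpfiles reporting that two tmpfiles.d fragments declare the same path; the declaration read first wins and the later one is ignored, so these are harmless. Each declaration follows the tmpfiles.d(5) column layout; the line below is an illustrative sketch, not one of the actual conflicting entries:

    # Type  Path             Mode  User  Group  Age  Argument
    d       /var/lib/nfs/sm  0700  root  root   -    -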
Sep 12 00:16:48.523465 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 12 00:16:48.523709 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Sep 12 00:16:48.528157 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Sep 12 00:16:48.531189 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Sep 12 00:16:48.535226 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Sep 12 00:16:48.536803 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Sep 12 00:16:48.536962 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Sep 12 00:16:48.548524 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Sep 12 00:16:48.550654 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 12 00:16:48.552562 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Sep 12 00:16:48.554805 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 12 00:16:48.555318 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Sep 12 00:16:48.557377 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 12 00:16:48.557594 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Sep 12 00:16:48.559782 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 12 00:16:48.560099 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Sep 12 00:16:48.573070 systemd-udevd[1407]: Using default interface naming scheme 'v255'. Sep 12 00:16:48.608646 augenrules[1435]: No rules Sep 12 00:16:48.609639 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Sep 12 00:16:48.612579 systemd[1]: audit-rules.service: Deactivated successfully. Sep 12 00:16:48.613155 systemd[1]: Finished audit-rules.service - Load Audit Rules. Sep 12 00:16:48.615399 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Sep 12 00:16:48.624953 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 12 00:16:48.626153 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Sep 12 00:16:48.629203 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Sep 12 00:16:48.636618 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Sep 12 00:16:48.656043 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Sep 12 00:16:48.657629 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Sep 12 00:16:48.657780 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). 
Sep 12 00:16:48.702147 systemd[1]: Starting systemd-networkd.service - Network Configuration... Sep 12 00:16:48.705186 systemd[1]: Starting systemd-update-done.service - Update is Completed... Sep 12 00:16:48.706490 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Sep 12 00:16:48.706581 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 12 00:16:48.708473 systemd[1]: Started systemd-userdbd.service - User Database Manager. Sep 12 00:16:48.710995 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Sep 12 00:16:48.713699 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 12 00:16:48.714450 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Sep 12 00:16:48.717652 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 12 00:16:48.717867 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Sep 12 00:16:48.721986 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 12 00:16:48.722367 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Sep 12 00:16:48.742003 systemd[1]: Finished systemd-update-done.service - Update is Completed. Sep 12 00:16:48.751329 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 12 00:16:48.757206 systemd[1]: Starting audit-rules.service - Load Audit Rules... Sep 12 00:16:48.759084 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Sep 12 00:16:48.761325 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Sep 12 00:16:48.772234 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Sep 12 00:16:48.776190 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Sep 12 00:16:48.779086 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Sep 12 00:16:48.780478 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Sep 12 00:16:48.780525 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Sep 12 00:16:48.780622 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Sep 12 00:16:48.780654 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 12 00:16:48.781300 systemd[1]: Finished ensure-sysext.service. Sep 12 00:16:48.786466 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Sep 12 00:16:48.795904 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Sep 12 00:16:48.805829 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 12 00:16:48.807344 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. 
Sep 12 00:16:48.809350 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 12 00:16:48.809644 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Sep 12 00:16:48.811464 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Sep 12 00:16:48.813425 systemd[1]: modprobe@drm.service: Deactivated successfully. Sep 12 00:16:48.820165 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Sep 12 00:16:48.822176 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 12 00:16:48.822531 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Sep 12 00:16:48.832898 augenrules[1488]: /sbin/augenrules: No change Sep 12 00:16:48.833489 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Sep 12 00:16:48.842908 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Sep 12 00:16:48.849640 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Sep 12 00:16:48.863569 augenrules[1525]: No rules Sep 12 00:16:48.866185 systemd[1]: audit-rules.service: Deactivated successfully. Sep 12 00:16:48.867889 systemd[1]: Finished audit-rules.service - Load Audit Rules. Sep 12 00:16:48.878898 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3 Sep 12 00:16:48.881859 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Sep 12 00:16:48.883978 kernel: ACPI: button: Power Button [PWRF] Sep 12 00:16:48.885895 kernel: mousedev: PS/2 mouse device common for all mice Sep 12 00:16:48.885793 systemd-resolved[1406]: Positive Trust Anchors: Sep 12 00:16:48.885813 systemd-resolved[1406]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Sep 12 00:16:48.885845 systemd-resolved[1406]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Sep 12 00:16:48.890779 systemd-resolved[1406]: Defaulting to hostname 'linux'. Sep 12 00:16:48.892522 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Sep 12 00:16:48.893740 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Sep 12 00:16:48.925949 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt Sep 12 00:16:48.928971 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD Sep 12 00:16:48.984771 systemd-networkd[1478]: lo: Link UP Sep 12 00:16:48.984787 systemd-networkd[1478]: lo: Gained carrier Sep 12 00:16:48.986973 systemd-networkd[1478]: Enumeration completed Sep 12 00:16:48.987110 systemd[1]: Started systemd-networkd.service - Network Configuration. Sep 12 00:16:48.987393 systemd-networkd[1478]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. 
Sep 12 00:16:48.987405 systemd-networkd[1478]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Sep 12 00:16:48.988178 systemd-networkd[1478]: eth0: Link UP Sep 12 00:16:48.988333 systemd-networkd[1478]: eth0: Gained carrier Sep 12 00:16:48.988348 systemd-networkd[1478]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 12 00:16:48.989015 systemd[1]: Reached target network.target - Network. Sep 12 00:16:48.993038 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... Sep 12 00:16:48.997676 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Sep 12 00:16:49.017282 systemd-networkd[1478]: eth0: DHCPv4 address 10.0.0.92/16, gateway 10.0.0.1 acquired from 10.0.0.1 Sep 12 00:16:49.138202 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Sep 12 00:16:49.140691 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. Sep 12 00:16:49.143265 systemd[1]: Reached target sysinit.target - System Initialization. Sep 12 00:16:49.741664 systemd-resolved[1406]: Clock change detected. Flushing caches. Sep 12 00:16:49.741970 systemd-timesyncd[1497]: Contacted time server 10.0.0.1:123 (10.0.0.1). Sep 12 00:16:49.742019 systemd-timesyncd[1497]: Initial clock synchronization to Fri 2025-09-12 00:16:49.741583 UTC. Sep 12 00:16:49.742578 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Sep 12 00:16:49.743916 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Sep 12 00:16:49.745189 systemd[1]: Started google-oslogin-cache.timer - NSS cache refresh timer. Sep 12 00:16:49.746350 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Sep 12 00:16:49.747645 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Sep 12 00:16:49.747688 systemd[1]: Reached target paths.target - Path Units. Sep 12 00:16:49.748658 systemd[1]: Reached target time-set.target - System Time Set. Sep 12 00:16:49.749950 systemd[1]: Started logrotate.timer - Daily rotation of log files. Sep 12 00:16:49.752963 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Sep 12 00:16:49.754265 systemd[1]: Reached target timers.target - Timer Units. Sep 12 00:16:49.757764 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Sep 12 00:16:49.758846 kernel: kvm_amd: TSC scaling supported Sep 12 00:16:49.758876 kernel: kvm_amd: Nested Virtualization enabled Sep 12 00:16:49.758889 kernel: kvm_amd: Nested Paging enabled Sep 12 00:16:49.761915 kernel: kvm_amd: LBR virtualization supported Sep 12 00:16:49.761957 kernel: kvm_amd: Virtual VMLOAD VMSAVE supported Sep 12 00:16:49.761978 kernel: kvm_amd: Virtual GIF supported Sep 12 00:16:49.767806 systemd[1]: Starting docker.socket - Docker Socket for the API... Sep 12 00:16:49.776150 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). Sep 12 00:16:49.777701 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). Sep 12 00:16:49.780503 systemd[1]: Reached target ssh-access.target - SSH Access Available. 
Sep 12 00:16:49.788929 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Sep 12 00:16:49.791303 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. Sep 12 00:16:49.794011 systemd[1]: Listening on docker.socket - Docker Socket for the API. Sep 12 00:16:49.806288 systemd[1]: Reached target sockets.target - Socket Units. Sep 12 00:16:49.807652 systemd[1]: Reached target basic.target - Basic System. Sep 12 00:16:49.809348 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Sep 12 00:16:49.809548 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Sep 12 00:16:49.813803 systemd[1]: Starting containerd.service - containerd container runtime... Sep 12 00:16:49.816217 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Sep 12 00:16:49.818450 kernel: EDAC MC: Ver: 3.0.0 Sep 12 00:16:49.819238 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Sep 12 00:16:49.827213 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Sep 12 00:16:49.830314 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Sep 12 00:16:49.831483 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Sep 12 00:16:49.832761 systemd[1]: Starting google-oslogin-cache.service - NSS cache refresh... Sep 12 00:16:49.836493 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Sep 12 00:16:49.838765 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Sep 12 00:16:49.841752 jq[1559]: false Sep 12 00:16:49.840813 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Sep 12 00:16:49.842970 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Sep 12 00:16:49.849902 google_oslogin_nss_cache[1561]: oslogin_cache_refresh[1561]: Refreshing passwd entry cache Sep 12 00:16:49.849912 oslogin_cache_refresh[1561]: Refreshing passwd entry cache Sep 12 00:16:49.851286 systemd[1]: Starting systemd-logind.service - User Login Management... Sep 12 00:16:49.857004 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Sep 12 00:16:49.859451 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Sep 12 00:16:49.861631 extend-filesystems[1560]: Found /dev/vda6 Sep 12 00:16:49.863633 google_oslogin_nss_cache[1561]: oslogin_cache_refresh[1561]: Failure getting users, quitting Sep 12 00:16:49.863633 google_oslogin_nss_cache[1561]: oslogin_cache_refresh[1561]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. Sep 12 00:16:49.863633 google_oslogin_nss_cache[1561]: oslogin_cache_refresh[1561]: Refreshing group entry cache Sep 12 00:16:49.863084 oslogin_cache_refresh[1561]: Failure getting users, quitting Sep 12 00:16:49.863107 oslogin_cache_refresh[1561]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. Sep 12 00:16:49.863185 oslogin_cache_refresh[1561]: Refreshing group entry cache Sep 12 00:16:49.868311 extend-filesystems[1560]: Found /dev/vda9 Sep 12 00:16:49.865570 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. 
Sep 12 00:16:49.867061 systemd[1]: Starting update-engine.service - Update Engine... Sep 12 00:16:49.923097 oslogin_cache_refresh[1561]: Failure getting groups, quitting Sep 12 00:16:49.924806 google_oslogin_nss_cache[1561]: oslogin_cache_refresh[1561]: Failure getting groups, quitting Sep 12 00:16:49.924806 google_oslogin_nss_cache[1561]: oslogin_cache_refresh[1561]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Sep 12 00:16:49.887498 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Sep 12 00:16:49.925042 extend-filesystems[1560]: Checking size of /dev/vda9 Sep 12 00:16:49.923123 oslogin_cache_refresh[1561]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Sep 12 00:16:49.932213 update_engine[1579]: I20250912 00:16:49.894973 1579 main.cc:92] Flatcar Update Engine starting Sep 12 00:16:49.896092 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Sep 12 00:16:49.932967 jq[1581]: true Sep 12 00:16:49.897901 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Sep 12 00:16:49.898156 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Sep 12 00:16:49.933380 jq[1587]: true Sep 12 00:16:49.898628 systemd[1]: motdgen.service: Deactivated successfully. Sep 12 00:16:49.898962 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Sep 12 00:16:49.900489 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Sep 12 00:16:49.900753 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Sep 12 00:16:49.924758 (ntainerd)[1588]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Sep 12 00:16:49.931673 systemd[1]: google-oslogin-cache.service: Deactivated successfully. Sep 12 00:16:49.932000 systemd[1]: Finished google-oslogin-cache.service - NSS cache refresh. Sep 12 00:16:49.944840 extend-filesystems[1560]: Resized partition /dev/vda9 Sep 12 00:16:49.957798 extend-filesystems[1610]: resize2fs 1.47.2 (1-Jan-2025) Sep 12 00:16:49.964783 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Sep 12 00:16:50.001647 tar[1601]: linux-amd64/LICENSE Sep 12 00:16:50.001647 tar[1601]: linux-amd64/helm Sep 12 00:16:50.014822 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Sep 12 00:16:50.085254 extend-filesystems[1610]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Sep 12 00:16:50.085254 extend-filesystems[1610]: old_desc_blocks = 1, new_desc_blocks = 1 Sep 12 00:16:50.085254 extend-filesystems[1610]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. Sep 12 00:16:50.083331 systemd[1]: extend-filesystems.service: Deactivated successfully. Sep 12 00:16:50.090938 extend-filesystems[1560]: Resized filesystem in /dev/vda9 Sep 12 00:16:50.083682 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. 
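For scale, the resize2fs lines above grow /dev/vda9 from 553472 to 1864699 blocks at 4 KiB each; a quick back-of-the-envelope conversion (illustrative only):

```python
# Size of /dev/vda9 before and after the on-line resize logged above
# (block counts from resize2fs, 4 KiB ext4 blocks).
BLOCK_SIZE = 4096

before_bytes = 553_472 * BLOCK_SIZE
after_bytes = 1_864_699 * BLOCK_SIZE

print(f"before: {before_bytes / 2**30:.2f} GiB")  # ~2.11 GiB
print(f"after:  {after_bytes / 2**30:.2f} GiB")   # ~7.11 GiB
```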
Sep 12 00:16:50.104521 bash[1619]: Updated "/home/core/.ssh/authorized_keys" Sep 12 00:16:50.116617 dbus-daemon[1556]: [system] SELinux support is enabled Sep 12 00:16:50.124059 update_engine[1579]: I20250912 00:16:50.123905 1579 update_check_scheduler.cc:74] Next update check in 5m52s Sep 12 00:16:50.125102 systemd-logind[1570]: Watching system buttons on /dev/input/event2 (Power Button) Sep 12 00:16:50.125141 systemd-logind[1570]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Sep 12 00:16:50.127349 systemd-logind[1570]: New seat seat0. Sep 12 00:16:50.222471 sshd_keygen[1582]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Sep 12 00:16:50.339661 systemd[1]: Started dbus.service - D-Bus System Message Bus. Sep 12 00:16:50.343073 systemd[1]: Started systemd-logind.service - User Login Management. Sep 12 00:16:50.344521 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Sep 12 00:16:50.345871 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Sep 12 00:16:50.347325 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Sep 12 00:16:50.361923 dbus-daemon[1556]: [system] Successfully activated service 'org.freedesktop.systemd1' Sep 12 00:16:50.378681 systemd[1]: Started update-engine.service - Update Engine. Sep 12 00:16:50.382885 systemd[1]: Starting issuegen.service - Generate /run/issue... Sep 12 00:16:50.389771 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Sep 12 00:16:50.390214 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Sep 12 00:16:50.390355 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Sep 12 00:16:50.391737 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Sep 12 00:16:50.391856 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Sep 12 00:16:50.396609 systemd[1]: Started locksmithd.service - Cluster reboot manager. Sep 12 00:16:50.420672 systemd[1]: issuegen.service: Deactivated successfully. Sep 12 00:16:50.420960 systemd[1]: Finished issuegen.service - Generate /run/issue. Sep 12 00:16:50.427957 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Sep 12 00:16:50.500296 containerd[1588]: time="2025-09-12T00:16:50Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8 Sep 12 00:16:50.501098 containerd[1588]: time="2025-09-12T00:16:50.501056879Z" level=info msg="starting containerd" revision=06b99ca80cdbfbc6cc8bd567021738c9af2b36ce version=v2.0.4 Sep 12 00:16:50.511549 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Sep 12 00:16:50.515433 systemd[1]: Started getty@tty1.service - Getty on tty1. 
Sep 12 00:16:50.515646 containerd[1588]: time="2025-09-12T00:16:50.515594211Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="19.727µs" Sep 12 00:16:50.515804 containerd[1588]: time="2025-09-12T00:16:50.515787013Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1 Sep 12 00:16:50.515863 containerd[1588]: time="2025-09-12T00:16:50.515850733Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1 Sep 12 00:16:50.516134 containerd[1588]: time="2025-09-12T00:16:50.516113265Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1 Sep 12 00:16:50.516213 containerd[1588]: time="2025-09-12T00:16:50.516198675Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1 Sep 12 00:16:50.516302 containerd[1588]: time="2025-09-12T00:16:50.516287231Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Sep 12 00:16:50.516535 containerd[1588]: time="2025-09-12T00:16:50.516512083Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Sep 12 00:16:50.516591 containerd[1588]: time="2025-09-12T00:16:50.516578437Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Sep 12 00:16:50.516968 containerd[1588]: time="2025-09-12T00:16:50.516946878Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Sep 12 00:16:50.517027 containerd[1588]: time="2025-09-12T00:16:50.517012922Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Sep 12 00:16:50.517086 containerd[1588]: time="2025-09-12T00:16:50.517071752Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Sep 12 00:16:50.517140 containerd[1588]: time="2025-09-12T00:16:50.517127407Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1 Sep 12 00:16:50.517315 containerd[1588]: time="2025-09-12T00:16:50.517296995Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1 Sep 12 00:16:50.517947 containerd[1588]: time="2025-09-12T00:16:50.517922929Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Sep 12 00:16:50.518049 containerd[1588]: time="2025-09-12T00:16:50.518032755Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Sep 12 00:16:50.518111 containerd[1588]: time="2025-09-12T00:16:50.518095953Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1 Sep 12 00:16:50.518200 containerd[1588]: time="2025-09-12T00:16:50.518183487Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1 Sep 12 00:16:50.518650 containerd[1588]: 
time="2025-09-12T00:16:50.518633100Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1 Sep 12 00:16:50.518794 containerd[1588]: time="2025-09-12T00:16:50.518777211Z" level=info msg="metadata content store policy set" policy=shared Sep 12 00:16:50.518967 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Sep 12 00:16:50.520003 locksmithd[1642]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Sep 12 00:16:50.520601 systemd[1]: Reached target getty.target - Login Prompts. Sep 12 00:16:50.642007 containerd[1588]: time="2025-09-12T00:16:50.641940235Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1 Sep 12 00:16:50.642120 containerd[1588]: time="2025-09-12T00:16:50.642053818Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1 Sep 12 00:16:50.642120 containerd[1588]: time="2025-09-12T00:16:50.642071221Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1 Sep 12 00:16:50.642120 containerd[1588]: time="2025-09-12T00:16:50.642084486Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1 Sep 12 00:16:50.642120 containerd[1588]: time="2025-09-12T00:16:50.642107479Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1 Sep 12 00:16:50.643705 containerd[1588]: time="2025-09-12T00:16:50.642209270Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1 Sep 12 00:16:50.643705 containerd[1588]: time="2025-09-12T00:16:50.642240559Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1 Sep 12 00:16:50.643705 containerd[1588]: time="2025-09-12T00:16:50.642255096Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1 Sep 12 00:16:50.643705 containerd[1588]: time="2025-09-12T00:16:50.642277077Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1 Sep 12 00:16:50.643705 containerd[1588]: time="2025-09-12T00:16:50.642286675Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1 Sep 12 00:16:50.643705 containerd[1588]: time="2025-09-12T00:16:50.642294740Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1 Sep 12 00:16:50.643705 containerd[1588]: time="2025-09-12T00:16:50.642310941Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2 Sep 12 00:16:50.643705 containerd[1588]: time="2025-09-12T00:16:50.642524872Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1 Sep 12 00:16:50.643705 containerd[1588]: time="2025-09-12T00:16:50.642548145Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1 Sep 12 00:16:50.643705 containerd[1588]: time="2025-09-12T00:16:50.642568413Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1 Sep 12 00:16:50.643705 containerd[1588]: time="2025-09-12T00:16:50.642581728Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1 Sep 12 00:16:50.643705 containerd[1588]: time="2025-09-12T00:16:50.642592499Z" level=info msg="loading plugin" 
id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 Sep 12 00:16:50.643705 containerd[1588]: time="2025-09-12T00:16:50.642603399Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1 Sep 12 00:16:50.643705 containerd[1588]: time="2025-09-12T00:16:50.642615191Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1 Sep 12 00:16:50.643705 containerd[1588]: time="2025-09-12T00:16:50.642631101Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1 Sep 12 00:16:50.644101 containerd[1588]: time="2025-09-12T00:16:50.642684892Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 Sep 12 00:16:50.644101 containerd[1588]: time="2025-09-12T00:16:50.642820466Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 Sep 12 00:16:50.644101 containerd[1588]: time="2025-09-12T00:16:50.642840804Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 Sep 12 00:16:50.644101 containerd[1588]: time="2025-09-12T00:16:50.642944048Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" Sep 12 00:16:50.644101 containerd[1588]: time="2025-09-12T00:16:50.642973894Z" level=info msg="Start snapshots syncer" Sep 12 00:16:50.644101 containerd[1588]: time="2025-09-12T00:16:50.643011635Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 Sep 12 00:16:50.644207 containerd[1588]: time="2025-09-12T00:16:50.643334340Z" level=info msg="starting cri plugin" config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" Sep 12 00:16:50.644207 containerd[1588]: time="2025-09-12T00:16:50.643418437Z" level=info msg="loading plugin" 
id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 Sep 12 00:16:50.644371 containerd[1588]: time="2025-09-12T00:16:50.643507655Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 Sep 12 00:16:50.644371 containerd[1588]: time="2025-09-12T00:16:50.643615998Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 Sep 12 00:16:50.644371 containerd[1588]: time="2025-09-12T00:16:50.643636687Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 Sep 12 00:16:50.644371 containerd[1588]: time="2025-09-12T00:16:50.643647046Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 Sep 12 00:16:50.644371 containerd[1588]: time="2025-09-12T00:16:50.643660061Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 Sep 12 00:16:50.644371 containerd[1588]: time="2025-09-12T00:16:50.644147284Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 Sep 12 00:16:50.644371 containerd[1588]: time="2025-09-12T00:16:50.644292326Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 Sep 12 00:16:50.644371 containerd[1588]: time="2025-09-12T00:16:50.644312584Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 Sep 12 00:16:50.644519 containerd[1588]: time="2025-09-12T00:16:50.644411770Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 Sep 12 00:16:50.645577 containerd[1588]: time="2025-09-12T00:16:50.644439743Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 Sep 12 00:16:50.645577 containerd[1588]: time="2025-09-12T00:16:50.644899094Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 Sep 12 00:16:50.645577 containerd[1588]: time="2025-09-12T00:16:50.644978824Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Sep 12 00:16:50.645577 containerd[1588]: time="2025-09-12T00:16:50.645003771Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Sep 12 00:16:50.645577 containerd[1588]: time="2025-09-12T00:16:50.645014390Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Sep 12 00:16:50.645577 containerd[1588]: time="2025-09-12T00:16:50.645030891Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Sep 12 00:16:50.645577 containerd[1588]: time="2025-09-12T00:16:50.645043735Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 Sep 12 00:16:50.645577 containerd[1588]: time="2025-09-12T00:16:50.645055668Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 Sep 12 00:16:50.645577 containerd[1588]: time="2025-09-12T00:16:50.645074163Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 Sep 12 00:16:50.645577 containerd[1588]: time="2025-09-12T00:16:50.645123906Z" level=info msg="runtime interface 
created" Sep 12 00:16:50.645577 containerd[1588]: time="2025-09-12T00:16:50.645131510Z" level=info msg="created NRI interface" Sep 12 00:16:50.645577 containerd[1588]: time="2025-09-12T00:16:50.645144735Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 Sep 12 00:16:50.645577 containerd[1588]: time="2025-09-12T00:16:50.645163931Z" level=info msg="Connect containerd service" Sep 12 00:16:50.645577 containerd[1588]: time="2025-09-12T00:16:50.645215618Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Sep 12 00:16:50.647352 containerd[1588]: time="2025-09-12T00:16:50.647311098Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Sep 12 00:16:50.898519 tar[1601]: linux-amd64/README.md Sep 12 00:16:50.921761 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Sep 12 00:16:51.008033 containerd[1588]: time="2025-09-12T00:16:51.007976484Z" level=info msg="Start subscribing containerd event" Sep 12 00:16:51.008107 containerd[1588]: time="2025-09-12T00:16:51.008069649Z" level=info msg="Start recovering state" Sep 12 00:16:51.008267 containerd[1588]: time="2025-09-12T00:16:51.008243374Z" level=info msg="Start event monitor" Sep 12 00:16:51.008309 containerd[1588]: time="2025-09-12T00:16:51.008271678Z" level=info msg="Start cni network conf syncer for default" Sep 12 00:16:51.008309 containerd[1588]: time="2025-09-12T00:16:51.008283239Z" level=info msg="Start streaming server" Sep 12 00:16:51.008309 containerd[1588]: time="2025-09-12T00:16:51.008304068Z" level=info msg="Registered namespace \"k8s.io\" with NRI" Sep 12 00:16:51.008384 containerd[1588]: time="2025-09-12T00:16:51.008317674Z" level=info msg="runtime interface starting up..." Sep 12 00:16:51.008384 containerd[1588]: time="2025-09-12T00:16:51.008325899Z" level=info msg="starting plugins..." Sep 12 00:16:51.008384 containerd[1588]: time="2025-09-12T00:16:51.008348301Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Sep 12 00:16:51.008384 containerd[1588]: time="2025-09-12T00:16:51.008350856Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Sep 12 00:16:51.008526 containerd[1588]: time="2025-09-12T00:16:51.008452487Z" level=info msg=serving... address=/run/containerd/containerd.sock Sep 12 00:16:51.008612 containerd[1588]: time="2025-09-12T00:16:51.008595004Z" level=info msg="containerd successfully booted in 0.509649s" Sep 12 00:16:51.008733 systemd[1]: Started containerd.service - containerd container runtime. Sep 12 00:16:51.099050 systemd-networkd[1478]: eth0: Gained IPv6LL Sep 12 00:16:51.102744 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Sep 12 00:16:51.104608 systemd[1]: Reached target network-online.target - Network is Online. Sep 12 00:16:51.107658 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Sep 12 00:16:51.110248 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 12 00:16:51.112537 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Sep 12 00:16:51.140234 systemd[1]: coreos-metadata.service: Deactivated successfully. Sep 12 00:16:51.140529 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. 
Sep 12 00:16:51.142312 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Sep 12 00:16:51.144544 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Sep 12 00:16:52.837119 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 12 00:16:52.839418 systemd[1]: Reached target multi-user.target - Multi-User System. Sep 12 00:16:52.841119 systemd[1]: Startup finished in 3.304s (kernel) + 7.569s (initrd) + 6.103s (userspace) = 16.976s. Sep 12 00:16:52.940182 (kubelet)[1699]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 12 00:16:52.947469 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Sep 12 00:16:52.949125 systemd[1]: Started sshd@0-10.0.0.92:22-10.0.0.1:60858.service - OpenSSH per-connection server daemon (10.0.0.1:60858). Sep 12 00:16:53.007153 sshd[1703]: Accepted publickey for core from 10.0.0.1 port 60858 ssh2: RSA SHA256:A0suq1pUL9tS3FfPj5cV4y8UVFHgmV6pScsqSKyxj00 Sep 12 00:16:53.008965 sshd-session[1703]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 00:16:53.016164 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Sep 12 00:16:53.017394 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Sep 12 00:16:53.025169 systemd-logind[1570]: New session 1 of user core. Sep 12 00:16:53.040815 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Sep 12 00:16:53.043994 systemd[1]: Starting user@500.service - User Manager for UID 500... Sep 12 00:16:53.060081 (systemd)[1713]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Sep 12 00:16:53.062527 systemd-logind[1570]: New session c1 of user core. Sep 12 00:16:53.291263 systemd[1713]: Queued start job for default target default.target. Sep 12 00:16:53.343351 systemd[1713]: Created slice app.slice - User Application Slice. Sep 12 00:16:53.343390 systemd[1713]: Reached target paths.target - Paths. Sep 12 00:16:53.343445 systemd[1713]: Reached target timers.target - Timers. Sep 12 00:16:53.345166 systemd[1713]: Starting dbus.socket - D-Bus User Message Bus Socket... Sep 12 00:16:53.358555 systemd[1713]: Listening on dbus.socket - D-Bus User Message Bus Socket. Sep 12 00:16:53.358704 systemd[1713]: Reached target sockets.target - Sockets. Sep 12 00:16:53.358776 systemd[1713]: Reached target basic.target - Basic System. Sep 12 00:16:53.358830 systemd[1713]: Reached target default.target - Main User Target. Sep 12 00:16:53.358870 systemd[1713]: Startup finished in 289ms. Sep 12 00:16:53.359245 systemd[1]: Started user@500.service - User Manager for UID 500. Sep 12 00:16:53.361553 systemd[1]: Started session-1.scope - Session 1 of User core. Sep 12 00:16:53.427905 systemd[1]: Started sshd@1-10.0.0.92:22-10.0.0.1:60862.service - OpenSSH per-connection server daemon (10.0.0.1:60862). Sep 12 00:16:53.533346 sshd[1726]: Accepted publickey for core from 10.0.0.1 port 60862 ssh2: RSA SHA256:A0suq1pUL9tS3FfPj5cV4y8UVFHgmV6pScsqSKyxj00 Sep 12 00:16:53.535222 sshd-session[1726]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 00:16:53.540179 systemd-logind[1570]: New session 2 of user core. Sep 12 00:16:53.549898 systemd[1]: Started session-2.scope - Session 2 of User core. 
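The "Startup finished" line above reports the three boot phases and their sum. A small parsing sketch (the regex is illustrative, not a systemd API) that checks the arithmetic:

```python
# Parse systemd's "Startup finished" summary from the log above and
# confirm the phase durations add up to the reported total.
import re

line = ("Startup finished in 3.304s (kernel) + 7.569s (initrd) "
        "+ 6.103s (userspace) = 16.976s.")

phases = [float(x) for x in re.findall(r"([\d.]+)s \(", line)]
total = float(re.search(r"= ([\d.]+)s", line).group(1))

assert abs(sum(phases) - total) < 1e-6
print(f"{' + '.join(f'{p}s' for p in phases)} = {total}s")  # checks out
```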
Sep 12 00:16:53.608338 sshd[1728]: Connection closed by 10.0.0.1 port 60862 Sep 12 00:16:53.608951 sshd-session[1726]: pam_unix(sshd:session): session closed for user core Sep 12 00:16:53.655616 systemd[1]: sshd@1-10.0.0.92:22-10.0.0.1:60862.service: Deactivated successfully. Sep 12 00:16:53.657451 systemd[1]: session-2.scope: Deactivated successfully. Sep 12 00:16:53.659001 systemd-logind[1570]: Session 2 logged out. Waiting for processes to exit. Sep 12 00:16:53.662488 systemd[1]: Started sshd@2-10.0.0.92:22-10.0.0.1:60878.service - OpenSSH per-connection server daemon (10.0.0.1:60878). Sep 12 00:16:53.663490 systemd-logind[1570]: Removed session 2. Sep 12 00:16:53.711389 kubelet[1699]: E0912 00:16:53.711300 1699 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 12 00:16:53.715247 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 12 00:16:53.715468 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 12 00:16:53.715901 systemd[1]: kubelet.service: Consumed 2.301s CPU time, 265.3M memory peak. Sep 12 00:16:53.716359 sshd[1734]: Accepted publickey for core from 10.0.0.1 port 60878 ssh2: RSA SHA256:A0suq1pUL9tS3FfPj5cV4y8UVFHgmV6pScsqSKyxj00 Sep 12 00:16:53.718211 sshd-session[1734]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 00:16:53.722849 systemd-logind[1570]: New session 3 of user core. Sep 12 00:16:53.732936 systemd[1]: Started session-3.scope - Session 3 of User core. Sep 12 00:16:53.783058 sshd[1737]: Connection closed by 10.0.0.1 port 60878 Sep 12 00:16:53.783453 sshd-session[1734]: pam_unix(sshd:session): session closed for user core Sep 12 00:16:53.803611 systemd[1]: sshd@2-10.0.0.92:22-10.0.0.1:60878.service: Deactivated successfully. Sep 12 00:16:53.805624 systemd[1]: session-3.scope: Deactivated successfully. Sep 12 00:16:53.806444 systemd-logind[1570]: Session 3 logged out. Waiting for processes to exit. Sep 12 00:16:53.809971 systemd[1]: Started sshd@3-10.0.0.92:22-10.0.0.1:60890.service - OpenSSH per-connection server daemon (10.0.0.1:60890). Sep 12 00:16:53.810945 systemd-logind[1570]: Removed session 3. Sep 12 00:16:53.872133 sshd[1743]: Accepted publickey for core from 10.0.0.1 port 60890 ssh2: RSA SHA256:A0suq1pUL9tS3FfPj5cV4y8UVFHgmV6pScsqSKyxj00 Sep 12 00:16:53.873815 sshd-session[1743]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 00:16:53.878361 systemd-logind[1570]: New session 4 of user core. Sep 12 00:16:53.887817 systemd[1]: Started session-4.scope - Session 4 of User core. Sep 12 00:16:53.941076 sshd[1745]: Connection closed by 10.0.0.1 port 60890 Sep 12 00:16:53.941461 sshd-session[1743]: pam_unix(sshd:session): session closed for user core Sep 12 00:16:53.953528 systemd[1]: sshd@3-10.0.0.92:22-10.0.0.1:60890.service: Deactivated successfully. Sep 12 00:16:53.955364 systemd[1]: session-4.scope: Deactivated successfully. Sep 12 00:16:53.956078 systemd-logind[1570]: Session 4 logged out. Waiting for processes to exit. Sep 12 00:16:53.958879 systemd[1]: Started sshd@4-10.0.0.92:22-10.0.0.1:60904.service - OpenSSH per-connection server daemon (10.0.0.1:60904). Sep 12 00:16:53.959639 systemd-logind[1570]: Removed session 4. 
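The kubelet exit above (and the later restarts) comes from a missing /var/lib/kubelet/config.yaml: the unit is enabled before kubeadm has written that file, so systemd keeps restarting it until the node is initialized. A hypothetical reduction of the failing step:

```python
# Hypothetical sketch of the failing load behind the run.go:72 error above:
# kubelet cannot read /var/lib/kubelet/config.yaml because kubeadm has
# not created it yet, so the process exits and systemd restarts it later.
from pathlib import Path

config_path = Path("/var/lib/kubelet/config.yaml")
try:
    config_path.read_text()
except FileNotFoundError as err:
    print(f"failed to load kubelet config file: {err}")
```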
Sep 12 00:16:54.007680 sshd[1751]: Accepted publickey for core from 10.0.0.1 port 60904 ssh2: RSA SHA256:A0suq1pUL9tS3FfPj5cV4y8UVFHgmV6pScsqSKyxj00 Sep 12 00:16:54.009325 sshd-session[1751]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 00:16:54.013810 systemd-logind[1570]: New session 5 of user core. Sep 12 00:16:54.023916 systemd[1]: Started session-5.scope - Session 5 of User core. Sep 12 00:16:54.083855 sudo[1754]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Sep 12 00:16:54.084266 sudo[1754]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 12 00:16:54.104383 sudo[1754]: pam_unix(sudo:session): session closed for user root Sep 12 00:16:54.106435 sshd[1753]: Connection closed by 10.0.0.1 port 60904 Sep 12 00:16:54.106841 sshd-session[1751]: pam_unix(sshd:session): session closed for user core Sep 12 00:16:54.116238 systemd[1]: sshd@4-10.0.0.92:22-10.0.0.1:60904.service: Deactivated successfully. Sep 12 00:16:54.118352 systemd[1]: session-5.scope: Deactivated successfully. Sep 12 00:16:54.119206 systemd-logind[1570]: Session 5 logged out. Waiting for processes to exit. Sep 12 00:16:54.122915 systemd[1]: Started sshd@5-10.0.0.92:22-10.0.0.1:60920.service - OpenSSH per-connection server daemon (10.0.0.1:60920). Sep 12 00:16:54.123667 systemd-logind[1570]: Removed session 5. Sep 12 00:16:54.181223 sshd[1760]: Accepted publickey for core from 10.0.0.1 port 60920 ssh2: RSA SHA256:A0suq1pUL9tS3FfPj5cV4y8UVFHgmV6pScsqSKyxj00 Sep 12 00:16:54.182803 sshd-session[1760]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 00:16:54.187028 systemd-logind[1570]: New session 6 of user core. Sep 12 00:16:54.197846 systemd[1]: Started session-6.scope - Session 6 of User core. Sep 12 00:16:54.251171 sudo[1764]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Sep 12 00:16:54.251485 sudo[1764]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 12 00:16:54.530230 sudo[1764]: pam_unix(sudo:session): session closed for user root Sep 12 00:16:54.536392 sudo[1763]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Sep 12 00:16:54.536679 sudo[1763]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 12 00:16:54.546166 systemd[1]: Starting audit-rules.service - Load Audit Rules... Sep 12 00:16:54.589423 augenrules[1786]: No rules Sep 12 00:16:54.591141 systemd[1]: audit-rules.service: Deactivated successfully. Sep 12 00:16:54.591464 systemd[1]: Finished audit-rules.service - Load Audit Rules. Sep 12 00:16:54.592736 sudo[1763]: pam_unix(sudo:session): session closed for user root Sep 12 00:16:54.594433 sshd[1762]: Connection closed by 10.0.0.1 port 60920 Sep 12 00:16:54.594741 sshd-session[1760]: pam_unix(sshd:session): session closed for user core Sep 12 00:16:54.605793 systemd[1]: sshd@5-10.0.0.92:22-10.0.0.1:60920.service: Deactivated successfully. Sep 12 00:16:54.607557 systemd[1]: session-6.scope: Deactivated successfully. Sep 12 00:16:54.608358 systemd-logind[1570]: Session 6 logged out. Waiting for processes to exit. Sep 12 00:16:54.611257 systemd[1]: Started sshd@6-10.0.0.92:22-10.0.0.1:60936.service - OpenSSH per-connection server daemon (10.0.0.1:60936). Sep 12 00:16:54.612042 systemd-logind[1570]: Removed session 6. 
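The sudo lines above follow a fixed "user : key=value ; ..." layout, which makes them easy to split when auditing a log like this one; a purely illustrative parser:

```python
# Split one of the sudo entries above into its fields; the layout is
# "user : PWD=... ; USER=... ; COMMAND=..." (illustrative parsing only).
line = ("core : PWD=/home/core ; USER=root ; "
        "COMMAND=/usr/sbin/systemctl restart audit-rules")

user, _, rest = line.partition(" : ")
fields = dict(part.split("=", 1) for part in rest.split(" ; "))

print(user)               # core
print(fields["COMMAND"])  # /usr/sbin/systemctl restart audit-rules
```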
Sep 12 00:16:54.667756 sshd[1795]: Accepted publickey for core from 10.0.0.1 port 60936 ssh2: RSA SHA256:A0suq1pUL9tS3FfPj5cV4y8UVFHgmV6pScsqSKyxj00 Sep 12 00:16:54.669615 sshd-session[1795]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 00:16:54.674520 systemd-logind[1570]: New session 7 of user core. Sep 12 00:16:54.691857 systemd[1]: Started session-7.scope - Session 7 of User core. Sep 12 00:16:54.745036 sudo[1798]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Sep 12 00:16:54.745369 sudo[1798]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 12 00:16:55.248607 systemd[1]: Starting docker.service - Docker Application Container Engine... Sep 12 00:16:55.270160 (dockerd)[1819]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Sep 12 00:16:55.936676 dockerd[1819]: time="2025-09-12T00:16:55.936578312Z" level=info msg="Starting up" Sep 12 00:16:55.937590 dockerd[1819]: time="2025-09-12T00:16:55.937556947Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" Sep 12 00:16:56.642086 dockerd[1819]: time="2025-09-12T00:16:56.642025757Z" level=info msg="Loading containers: start." Sep 12 00:16:56.652871 kernel: Initializing XFRM netlink socket Sep 12 00:16:56.982242 systemd-networkd[1478]: docker0: Link UP Sep 12 00:16:56.988310 dockerd[1819]: time="2025-09-12T00:16:56.988263086Z" level=info msg="Loading containers: done." Sep 12 00:16:57.007008 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck1914943167-merged.mount: Deactivated successfully. Sep 12 00:16:57.008410 dockerd[1819]: time="2025-09-12T00:16:57.008359183Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Sep 12 00:16:57.008481 dockerd[1819]: time="2025-09-12T00:16:57.008470652Z" level=info msg="Docker daemon" commit=bbd0a17ccc67e48d4a69393287b7fcc4f0578683 containerd-snapshotter=false storage-driver=overlay2 version=28.0.1 Sep 12 00:16:57.008605 dockerd[1819]: time="2025-09-12T00:16:57.008589445Z" level=info msg="Initializing buildkit" Sep 12 00:16:57.040971 dockerd[1819]: time="2025-09-12T00:16:57.040895050Z" level=info msg="Completed buildkit initialization" Sep 12 00:16:57.046558 dockerd[1819]: time="2025-09-12T00:16:57.046499941Z" level=info msg="Daemon has completed initialization" Sep 12 00:16:57.046685 dockerd[1819]: time="2025-09-12T00:16:57.046627059Z" level=info msg="API listen on /run/docker.sock" Sep 12 00:16:57.046783 systemd[1]: Started docker.service - Docker Application Container Engine. Sep 12 00:16:59.099769 containerd[1588]: time="2025-09-12T00:16:59.099688512Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.9\"" Sep 12 00:16:59.818215 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2241639957.mount: Deactivated successfully. 
Sep 12 00:17:01.406120 containerd[1588]: time="2025-09-12T00:17:01.406030657Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.32.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 00:17:01.425034 containerd[1588]: time="2025-09-12T00:17:01.424980335Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.32.9: active requests=0, bytes read=28837916" Sep 12 00:17:01.435298 containerd[1588]: time="2025-09-12T00:17:01.435249543Z" level=info msg="ImageCreate event name:\"sha256:abd2b525baf428ffb8b8b7d1e09761dc5cdb7ed0c7896a9427e29e84f8eafc59\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 00:17:01.447869 containerd[1588]: time="2025-09-12T00:17:01.447800701Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:6df11cc2ad9679b1117be34d3a0230add88bc0a08fd7a3ebc26b680575e8de97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 00:17:01.449232 containerd[1588]: time="2025-09-12T00:17:01.449159740Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.32.9\" with image id \"sha256:abd2b525baf428ffb8b8b7d1e09761dc5cdb7ed0c7896a9427e29e84f8eafc59\", repo tag \"registry.k8s.io/kube-apiserver:v1.32.9\", repo digest \"registry.k8s.io/kube-apiserver@sha256:6df11cc2ad9679b1117be34d3a0230add88bc0a08fd7a3ebc26b680575e8de97\", size \"28834515\" in 2.349390666s" Sep 12 00:17:01.449232 containerd[1588]: time="2025-09-12T00:17:01.449215574Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.9\" returns image reference \"sha256:abd2b525baf428ffb8b8b7d1e09761dc5cdb7ed0c7896a9427e29e84f8eafc59\"" Sep 12 00:17:01.450518 containerd[1588]: time="2025-09-12T00:17:01.450483712Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.9\"" Sep 12 00:17:03.549730 containerd[1588]: time="2025-09-12T00:17:03.549667551Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.32.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 00:17:03.550847 containerd[1588]: time="2025-09-12T00:17:03.550828589Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.32.9: active requests=0, bytes read=24787027" Sep 12 00:17:03.552282 containerd[1588]: time="2025-09-12T00:17:03.552256978Z" level=info msg="ImageCreate event name:\"sha256:0debe32fbb7223500fcf8c312f2a568a5abd3ed9274d8ec6780cfb30b8861e91\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 00:17:03.554888 containerd[1588]: time="2025-09-12T00:17:03.554856603Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:243c4b8e3bce271fcb1b78008ab996ab6976b1a20096deac08338fcd17979922\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 00:17:03.555786 containerd[1588]: time="2025-09-12T00:17:03.555762872Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.32.9\" with image id \"sha256:0debe32fbb7223500fcf8c312f2a568a5abd3ed9274d8ec6780cfb30b8861e91\", repo tag \"registry.k8s.io/kube-controller-manager:v1.32.9\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:243c4b8e3bce271fcb1b78008ab996ab6976b1a20096deac08338fcd17979922\", size \"26421706\" in 2.105244946s" Sep 12 00:17:03.555835 containerd[1588]: time="2025-09-12T00:17:03.555791075Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.9\" returns image reference \"sha256:0debe32fbb7223500fcf8c312f2a568a5abd3ed9274d8ec6780cfb30b8861e91\"" Sep 12 00:17:03.556374 
containerd[1588]: time="2025-09-12T00:17:03.556354191Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.9\"" Sep 12 00:17:03.864361 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Sep 12 00:17:03.866216 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 12 00:17:04.180870 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 12 00:17:04.194006 (kubelet)[2100]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 12 00:17:04.311919 kubelet[2100]: E0912 00:17:04.311832 2100 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 12 00:17:04.319969 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 12 00:17:04.320223 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 12 00:17:04.320729 systemd[1]: kubelet.service: Consumed 390ms CPU time, 110.9M memory peak. Sep 12 00:17:06.717860 containerd[1588]: time="2025-09-12T00:17:06.717795562Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.32.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 00:17:06.718602 containerd[1588]: time="2025-09-12T00:17:06.718553223Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.32.9: active requests=0, bytes read=19176289" Sep 12 00:17:06.720738 containerd[1588]: time="2025-09-12T00:17:06.720561049Z" level=info msg="ImageCreate event name:\"sha256:6934c23b154fcb9bf54ed5913782de746735a49f4daa4732285915050cd44ad5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 00:17:06.724114 containerd[1588]: time="2025-09-12T00:17:06.724039672Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:50c49520dbd0e8b4076b6a5c77d8014df09ea3d59a73e8bafd2678d51ebb92d5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 00:17:06.725056 containerd[1588]: time="2025-09-12T00:17:06.725029378Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.32.9\" with image id \"sha256:6934c23b154fcb9bf54ed5913782de746735a49f4daa4732285915050cd44ad5\", repo tag \"registry.k8s.io/kube-scheduler:v1.32.9\", repo digest \"registry.k8s.io/kube-scheduler@sha256:50c49520dbd0e8b4076b6a5c77d8014df09ea3d59a73e8bafd2678d51ebb92d5\", size \"20810986\" in 3.168648978s" Sep 12 00:17:06.725056 containerd[1588]: time="2025-09-12T00:17:06.725055147Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.9\" returns image reference \"sha256:6934c23b154fcb9bf54ed5913782de746735a49f4daa4732285915050cd44ad5\"" Sep 12 00:17:06.725512 containerd[1588]: time="2025-09-12T00:17:06.725476507Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.9\"" Sep 12 00:17:08.029361 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3344676753.mount: Deactivated successfully. 
Sep 12 00:17:09.385424 containerd[1588]: time="2025-09-12T00:17:09.385352303Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.32.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 00:17:09.455961 containerd[1588]: time="2025-09-12T00:17:09.455892662Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.32.9: active requests=0, bytes read=30924206" Sep 12 00:17:09.494911 containerd[1588]: time="2025-09-12T00:17:09.494799476Z" level=info msg="ImageCreate event name:\"sha256:fa3fdca615a501743d8deb39729a96e731312aac8d96accec061d5265360332f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 00:17:09.578927 containerd[1588]: time="2025-09-12T00:17:09.578812681Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:886af02535dc34886e4618b902f8c140d89af57233a245621d29642224516064\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 00:17:09.579392 containerd[1588]: time="2025-09-12T00:17:09.579367872Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.32.9\" with image id \"sha256:fa3fdca615a501743d8deb39729a96e731312aac8d96accec061d5265360332f\", repo tag \"registry.k8s.io/kube-proxy:v1.32.9\", repo digest \"registry.k8s.io/kube-proxy@sha256:886af02535dc34886e4618b902f8c140d89af57233a245621d29642224516064\", size \"30923225\" in 2.853858554s" Sep 12 00:17:09.579435 containerd[1588]: time="2025-09-12T00:17:09.579396025Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.9\" returns image reference \"sha256:fa3fdca615a501743d8deb39729a96e731312aac8d96accec061d5265360332f\"" Sep 12 00:17:09.579946 containerd[1588]: time="2025-09-12T00:17:09.579921119Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" Sep 12 00:17:11.840194 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount218855302.mount: Deactivated successfully. 
Sep 12 00:17:13.010886 containerd[1588]: time="2025-09-12T00:17:13.010818697Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 00:17:13.011601 containerd[1588]: time="2025-09-12T00:17:13.011547724Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=18565241" Sep 12 00:17:13.012834 containerd[1588]: time="2025-09-12T00:17:13.012800614Z" level=info msg="ImageCreate event name:\"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 00:17:13.016334 containerd[1588]: time="2025-09-12T00:17:13.016286551Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 00:17:13.017401 containerd[1588]: time="2025-09-12T00:17:13.017333424Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"18562039\" in 3.437380786s" Sep 12 00:17:13.017401 containerd[1588]: time="2025-09-12T00:17:13.017379220Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\"" Sep 12 00:17:13.017934 containerd[1588]: time="2025-09-12T00:17:13.017893204Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Sep 12 00:17:13.436337 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2102424705.mount: Deactivated successfully. 
Sep 12 00:17:13.442540 containerd[1588]: time="2025-09-12T00:17:13.442480317Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 12 00:17:13.443220 containerd[1588]: time="2025-09-12T00:17:13.443158879Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138" Sep 12 00:17:13.444323 containerd[1588]: time="2025-09-12T00:17:13.444276776Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 12 00:17:13.448662 containerd[1588]: time="2025-09-12T00:17:13.448607407Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 12 00:17:13.449227 containerd[1588]: time="2025-09-12T00:17:13.449187826Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 431.262772ms" Sep 12 00:17:13.449295 containerd[1588]: time="2025-09-12T00:17:13.449231077Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Sep 12 00:17:13.449940 containerd[1588]: time="2025-09-12T00:17:13.449909790Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\"" Sep 12 00:17:14.219433 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3003581437.mount: Deactivated successfully. Sep 12 00:17:14.366022 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Sep 12 00:17:14.372855 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 12 00:17:14.749386 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 12 00:17:14.767038 (kubelet)[2196]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 12 00:17:14.822283 kubelet[2196]: E0912 00:17:14.822230 2196 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 12 00:17:14.901393 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 12 00:17:14.901633 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 12 00:17:14.902100 systemd[1]: kubelet.service: Consumed 301ms CPU time, 110.4M memory peak. 
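By contrast, the pause image above is only 320368 bytes yet still takes about 431 ms, so small pulls are dominated by registry round-trips rather than bandwidth; the same computation makes that visible:

```python
# The pause:3.10 pull above moves very little data for its duration,
# i.e. latency rather than bandwidth bounds small image pulls.
size_bytes = 320_368
seconds = 0.431262772

print(f"effective rate: {size_bytes / seconds / 1e6:.2f} MB/s")  # ~0.74 MB/s
```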
Sep 12 00:17:16.851653 containerd[1588]: time="2025-09-12T00:17:16.851480339Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.16-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 00:17:16.852421 containerd[1588]: time="2025-09-12T00:17:16.852299104Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.16-0: active requests=0, bytes read=57682056" Sep 12 00:17:16.853408 containerd[1588]: time="2025-09-12T00:17:16.853371144Z" level=info msg="ImageCreate event name:\"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 00:17:16.855820 containerd[1588]: time="2025-09-12T00:17:16.855779371Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 00:17:16.856782 containerd[1588]: time="2025-09-12T00:17:16.856742377Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.16-0\" with image id \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\", repo tag \"registry.k8s.io/etcd:3.5.16-0\", repo digest \"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\", size \"57680541\" in 3.406802411s" Sep 12 00:17:16.856782 containerd[1588]: time="2025-09-12T00:17:16.856773074Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\" returns image reference \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\"" Sep 12 00:17:18.818426 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Sep 12 00:17:18.818681 systemd[1]: kubelet.service: Consumed 301ms CPU time, 110.4M memory peak. Sep 12 00:17:18.821402 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 12 00:17:18.850971 systemd[1]: Reload requested from client PID 2275 ('systemctl') (unit session-7.scope)... Sep 12 00:17:18.850990 systemd[1]: Reloading... Sep 12 00:17:19.016808 zram_generator::config[2320]: No configuration found. Sep 12 00:17:19.416624 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 12 00:17:19.567266 systemd[1]: Reloading finished in 715 ms. Sep 12 00:17:19.637863 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Sep 12 00:17:19.638003 systemd[1]: kubelet.service: Failed with result 'signal'. Sep 12 00:17:19.638382 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Sep 12 00:17:19.638446 systemd[1]: kubelet.service: Consumed 182ms CPU time, 98.3M memory peak. Sep 12 00:17:19.640399 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 12 00:17:19.845537 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 12 00:17:19.857030 (kubelet)[2366]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Sep 12 00:17:19.896057 kubelet[2366]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 12 00:17:19.896579 kubelet[2366]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. 
Image garbage collector will get sandbox image information from CRI. Sep 12 00:17:19.896579 kubelet[2366]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 12 00:17:19.896796 kubelet[2366]: I0912 00:17:19.896745 2366 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Sep 12 00:17:20.475972 kubelet[2366]: I0912 00:17:20.475919 2366 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Sep 12 00:17:20.475972 kubelet[2366]: I0912 00:17:20.475952 2366 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Sep 12 00:17:20.476219 kubelet[2366]: I0912 00:17:20.476207 2366 server.go:954] "Client rotation is on, will bootstrap in background" Sep 12 00:17:20.504132 kubelet[2366]: E0912 00:17:20.504077 2366 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.92:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.92:6443: connect: connection refused" logger="UnhandledError" Sep 12 00:17:20.504785 kubelet[2366]: I0912 00:17:20.504745 2366 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Sep 12 00:17:20.534156 kubelet[2366]: I0912 00:17:20.534128 2366 server.go:1444] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Sep 12 00:17:20.539786 kubelet[2366]: I0912 00:17:20.539502 2366 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Sep 12 00:17:20.540497 kubelet[2366]: I0912 00:17:20.540458 2366 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Sep 12 00:17:20.540876 kubelet[2366]: I0912 00:17:20.540507 2366 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Sep 12 00:17:20.541009 kubelet[2366]: I0912 00:17:20.540897 2366 topology_manager.go:138] "Creating topology manager with none policy" Sep 12 00:17:20.541009 kubelet[2366]: I0912 00:17:20.540909 2366 container_manager_linux.go:304] "Creating device plugin manager" Sep 12 00:17:20.541107 kubelet[2366]: I0912 00:17:20.541066 2366 state_mem.go:36] "Initialized new in-memory state store" Sep 12 00:17:20.544143 kubelet[2366]: I0912 00:17:20.544103 2366 kubelet.go:446] "Attempting to sync node with API server" Sep 12 00:17:20.544143 kubelet[2366]: I0912 00:17:20.544141 2366 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Sep 12 00:17:20.544361 kubelet[2366]: I0912 00:17:20.544173 2366 kubelet.go:352] "Adding apiserver pod source" Sep 12 00:17:20.544361 kubelet[2366]: I0912 00:17:20.544221 2366 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Sep 12 00:17:20.546913 kubelet[2366]: I0912 00:17:20.546883 2366 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v2.0.4" apiVersion="v1" Sep 12 00:17:20.547432 kubelet[2366]: W0912 00:17:20.547379 2366 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.92:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.92:6443: connect: connection refused Sep 12 00:17:20.547478 kubelet[2366]: E0912 00:17:20.547431 2366 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get 
\"https://10.0.0.92:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.92:6443: connect: connection refused" logger="UnhandledError" Sep 12 00:17:20.547811 kubelet[2366]: I0912 00:17:20.547794 2366 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Sep 12 00:17:20.548605 kubelet[2366]: W0912 00:17:20.548576 2366 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Sep 12 00:17:20.548809 kubelet[2366]: W0912 00:17:20.548705 2366 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.92:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.92:6443: connect: connection refused Sep 12 00:17:20.548877 kubelet[2366]: E0912 00:17:20.548853 2366 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.92:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.92:6443: connect: connection refused" logger="UnhandledError" Sep 12 00:17:20.550802 kubelet[2366]: I0912 00:17:20.550774 2366 watchdog_linux.go:99] "Systemd watchdog is not enabled" Sep 12 00:17:20.550856 kubelet[2366]: I0912 00:17:20.550813 2366 server.go:1287] "Started kubelet" Sep 12 00:17:20.551420 kubelet[2366]: I0912 00:17:20.551298 2366 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Sep 12 00:17:20.551958 kubelet[2366]: I0912 00:17:20.551932 2366 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Sep 12 00:17:20.552048 kubelet[2366]: I0912 00:17:20.552010 2366 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Sep 12 00:17:20.552266 kubelet[2366]: I0912 00:17:20.552241 2366 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Sep 12 00:17:20.554636 kubelet[2366]: I0912 00:17:20.554184 2366 server.go:479] "Adding debug handlers to kubelet server" Sep 12 00:17:20.556006 kubelet[2366]: E0912 00:17:20.555977 2366 kubelet.go:1555] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Sep 12 00:17:20.556628 kubelet[2366]: I0912 00:17:20.556604 2366 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Sep 12 00:17:20.556941 kubelet[2366]: E0912 00:17:20.556916 2366 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 12 00:17:20.556985 kubelet[2366]: I0912 00:17:20.556945 2366 volume_manager.go:297] "Starting Kubelet Volume Manager" Sep 12 00:17:20.557977 kubelet[2366]: I0912 00:17:20.557094 2366 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Sep 12 00:17:20.557977 kubelet[2366]: I0912 00:17:20.557146 2366 reconciler.go:26] "Reconciler: start to sync state" Sep 12 00:17:20.557977 kubelet[2366]: W0912 00:17:20.557487 2366 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.92:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.92:6443: connect: connection refused Sep 12 00:17:20.557977 kubelet[2366]: E0912 00:17:20.557526 2366 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.92:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.92:6443: connect: connection refused" logger="UnhandledError" Sep 12 00:17:20.557977 kubelet[2366]: E0912 00:17:20.557553 2366 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.92:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.92:6443: connect: connection refused" interval="200ms" Sep 12 00:17:20.557977 kubelet[2366]: I0912 00:17:20.557724 2366 factory.go:221] Registration of the systemd container factory successfully Sep 12 00:17:20.557977 kubelet[2366]: I0912 00:17:20.557810 2366 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Sep 12 00:17:20.558396 kubelet[2366]: E0912 00:17:20.556816 2366 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.92:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.92:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.186460d9326208ef default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-09-12 00:17:20.550791407 +0000 UTC m=+0.690031716,LastTimestamp:2025-09-12 00:17:20.550791407 +0000 UTC m=+0.690031716,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Sep 12 00:17:20.558729 kubelet[2366]: I0912 00:17:20.558695 2366 factory.go:221] Registration of the containerd container factory successfully Sep 12 00:17:20.573007 kubelet[2366]: I0912 00:17:20.572929 2366 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Sep 12 00:17:20.574875 kubelet[2366]: I0912 00:17:20.574846 2366 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Sep 12 00:17:20.575777 kubelet[2366]: I0912 00:17:20.574883 2366 status_manager.go:227] "Starting to sync pod status with apiserver" Sep 12 00:17:20.575777 kubelet[2366]: I0912 00:17:20.574907 2366 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Sep 12 00:17:20.575777 kubelet[2366]: I0912 00:17:20.574918 2366 kubelet.go:2382] "Starting kubelet main sync loop" Sep 12 00:17:20.575777 kubelet[2366]: E0912 00:17:20.574969 2366 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Sep 12 00:17:20.575777 kubelet[2366]: I0912 00:17:20.575640 2366 cpu_manager.go:221] "Starting CPU manager" policy="none" Sep 12 00:17:20.575777 kubelet[2366]: I0912 00:17:20.575656 2366 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Sep 12 00:17:20.575777 kubelet[2366]: I0912 00:17:20.575679 2366 state_mem.go:36] "Initialized new in-memory state store" Sep 12 00:17:20.575777 kubelet[2366]: W0912 00:17:20.575659 2366 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.92:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.92:6443: connect: connection refused Sep 12 00:17:20.575777 kubelet[2366]: E0912 00:17:20.575740 2366 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.92:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.92:6443: connect: connection refused" logger="UnhandledError" Sep 12 00:17:20.581733 kubelet[2366]: I0912 00:17:20.581689 2366 policy_none.go:49] "None policy: Start" Sep 12 00:17:20.581733 kubelet[2366]: I0912 00:17:20.581737 2366 memory_manager.go:186] "Starting memorymanager" policy="None" Sep 12 00:17:20.581838 kubelet[2366]: I0912 00:17:20.581751 2366 state_mem.go:35] "Initializing new in-memory state store" Sep 12 00:17:20.588773 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Sep 12 00:17:20.602058 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Sep 12 00:17:20.605979 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Sep 12 00:17:20.625873 kubelet[2366]: I0912 00:17:20.625806 2366 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Sep 12 00:17:20.626064 kubelet[2366]: I0912 00:17:20.626028 2366 eviction_manager.go:189] "Eviction manager: starting control loop" Sep 12 00:17:20.626064 kubelet[2366]: I0912 00:17:20.626041 2366 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Sep 12 00:17:20.626463 kubelet[2366]: I0912 00:17:20.626348 2366 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Sep 12 00:17:20.627617 kubelet[2366]: E0912 00:17:20.627571 2366 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Sep 12 00:17:20.627804 kubelet[2366]: E0912 00:17:20.627641 2366 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Sep 12 00:17:20.684981 systemd[1]: Created slice kubepods-burstable-podd428eac419aeb64fce6b97233b2283c3.slice - libcontainer container kubepods-burstable-podd428eac419aeb64fce6b97233b2283c3.slice. Sep 12 00:17:20.710822 kubelet[2366]: E0912 00:17:20.710770 2366 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 12 00:17:20.714271 systemd[1]: Created slice kubepods-burstable-pod1403266a9792debaa127cd8df7a81c3c.slice - libcontainer container kubepods-burstable-pod1403266a9792debaa127cd8df7a81c3c.slice. Sep 12 00:17:20.727577 kubelet[2366]: I0912 00:17:20.727467 2366 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Sep 12 00:17:20.728010 kubelet[2366]: E0912 00:17:20.727957 2366 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.92:6443/api/v1/nodes\": dial tcp 10.0.0.92:6443: connect: connection refused" node="localhost" Sep 12 00:17:20.730510 kubelet[2366]: E0912 00:17:20.730466 2366 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 12 00:17:20.733648 systemd[1]: Created slice kubepods-burstable-pod72a30db4fc25e4da65a3b99eba43be94.slice - libcontainer container kubepods-burstable-pod72a30db4fc25e4da65a3b99eba43be94.slice. Sep 12 00:17:20.735799 kubelet[2366]: E0912 00:17:20.735765 2366 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 12 00:17:20.758279 kubelet[2366]: E0912 00:17:20.758227 2366 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.92:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.92:6443: connect: connection refused" interval="400ms" Sep 12 00:17:20.858743 kubelet[2366]: I0912 00:17:20.858637 2366 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/d428eac419aeb64fce6b97233b2283c3-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"d428eac419aeb64fce6b97233b2283c3\") " pod="kube-system/kube-apiserver-localhost" Sep 12 00:17:20.858743 kubelet[2366]: I0912 00:17:20.858698 2366 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/d428eac419aeb64fce6b97233b2283c3-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"d428eac419aeb64fce6b97233b2283c3\") " pod="kube-system/kube-apiserver-localhost" Sep 12 00:17:20.858743 kubelet[2366]: I0912 00:17:20.858739 2366 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/d428eac419aeb64fce6b97233b2283c3-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"d428eac419aeb64fce6b97233b2283c3\") " pod="kube-system/kube-apiserver-localhost" Sep 12 00:17:20.858743 kubelet[2366]: I0912 00:17:20.858763 2366 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" 
(UniqueName: \"kubernetes.io/host-path/1403266a9792debaa127cd8df7a81c3c-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"1403266a9792debaa127cd8df7a81c3c\") " pod="kube-system/kube-controller-manager-localhost" Sep 12 00:17:20.858990 kubelet[2366]: I0912 00:17:20.858856 2366 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/1403266a9792debaa127cd8df7a81c3c-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"1403266a9792debaa127cd8df7a81c3c\") " pod="kube-system/kube-controller-manager-localhost" Sep 12 00:17:20.858990 kubelet[2366]: I0912 00:17:20.858902 2366 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/1403266a9792debaa127cd8df7a81c3c-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"1403266a9792debaa127cd8df7a81c3c\") " pod="kube-system/kube-controller-manager-localhost" Sep 12 00:17:20.858990 kubelet[2366]: I0912 00:17:20.858934 2366 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/72a30db4fc25e4da65a3b99eba43be94-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"72a30db4fc25e4da65a3b99eba43be94\") " pod="kube-system/kube-scheduler-localhost" Sep 12 00:17:20.858990 kubelet[2366]: I0912 00:17:20.858954 2366 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/1403266a9792debaa127cd8df7a81c3c-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"1403266a9792debaa127cd8df7a81c3c\") " pod="kube-system/kube-controller-manager-localhost" Sep 12 00:17:20.858990 kubelet[2366]: I0912 00:17:20.858990 2366 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/1403266a9792debaa127cd8df7a81c3c-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"1403266a9792debaa127cd8df7a81c3c\") " pod="kube-system/kube-controller-manager-localhost" Sep 12 00:17:20.930151 kubelet[2366]: I0912 00:17:20.930107 2366 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Sep 12 00:17:20.930655 kubelet[2366]: E0912 00:17:20.930617 2366 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.92:6443/api/v1/nodes\": dial tcp 10.0.0.92:6443: connect: connection refused" node="localhost" Sep 12 00:17:21.012643 kubelet[2366]: E0912 00:17:21.012493 2366 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 00:17:21.013421 containerd[1588]: time="2025-09-12T00:17:21.013356893Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:d428eac419aeb64fce6b97233b2283c3,Namespace:kube-system,Attempt:0,}" Sep 12 00:17:21.031766 kubelet[2366]: E0912 00:17:21.031666 2366 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 00:17:21.032359 containerd[1588]: time="2025-09-12T00:17:21.032260183Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:1403266a9792debaa127cd8df7a81c3c,Namespace:kube-system,Attempt:0,}" Sep 12 00:17:21.036835 kubelet[2366]: E0912 00:17:21.036794 2366 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 00:17:21.037444 containerd[1588]: time="2025-09-12T00:17:21.037396195Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:72a30db4fc25e4da65a3b99eba43be94,Namespace:kube-system,Attempt:0,}" Sep 12 00:17:21.159931 kubelet[2366]: E0912 00:17:21.159876 2366 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.92:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.92:6443: connect: connection refused" interval="800ms" Sep 12 00:17:21.195458 containerd[1588]: time="2025-09-12T00:17:21.195390771Z" level=info msg="connecting to shim f3cb680ec5e5ab74ea3c5117b1f8d98fcc7740fce12d9b133229918a4d375442" address="unix:///run/containerd/s/6267d1bcd7d569ddcaf869d85a7396ff48fa2f8e1d1c9aa5f3898d8124886096" namespace=k8s.io protocol=ttrpc version=3 Sep 12 00:17:21.213474 containerd[1588]: time="2025-09-12T00:17:21.213416336Z" level=info msg="connecting to shim 407cb8f52314572589728c159abcb86ca61c743136f1bf8270a69bc5a6780e4b" address="unix:///run/containerd/s/b79410f3fbbd3193abbed4caae3626bfdfd39411bb9f201271c1bed2e9a11f75" namespace=k8s.io protocol=ttrpc version=3 Sep 12 00:17:21.240313 containerd[1588]: time="2025-09-12T00:17:21.240248776Z" level=info msg="connecting to shim b23fd13bd1c67a85c40284b7441502ce3c338b45cbaac5deaf16967189e1af84" address="unix:///run/containerd/s/0f4b5551c414c28aaf798d0a4af9278282f1ed2b0480ccfae8f78f54f7f420a4" namespace=k8s.io protocol=ttrpc version=3 Sep 12 00:17:21.250898 systemd[1]: Started cri-containerd-407cb8f52314572589728c159abcb86ca61c743136f1bf8270a69bc5a6780e4b.scope - libcontainer container 407cb8f52314572589728c159abcb86ca61c743136f1bf8270a69bc5a6780e4b. Sep 12 00:17:21.292688 systemd[1]: Started cri-containerd-f3cb680ec5e5ab74ea3c5117b1f8d98fcc7740fce12d9b133229918a4d375442.scope - libcontainer container f3cb680ec5e5ab74ea3c5117b1f8d98fcc7740fce12d9b133229918a4d375442. Sep 12 00:17:21.318842 systemd[1]: Started cri-containerd-b23fd13bd1c67a85c40284b7441502ce3c338b45cbaac5deaf16967189e1af84.scope - libcontainer container b23fd13bd1c67a85c40284b7441502ce3c338b45cbaac5deaf16967189e1af84. 
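The "Failed to ensure lease exists, will retry" entries above back off by doubling: interval="200ms", then "400ms", then "800ms" while 10.0.0.92:6443 refuses connections. A minimal stdlib sketch of that retry shape; retryEnsureLease and its caller are hypothetical stand-ins, not the kubelet's actual lease controller:

```go
package main

import (
	"fmt"
	"time"
)

// retryEnsureLease repeats ensureLease with a doubling wait, mirroring the
// 200ms -> 400ms -> 800ms intervals in the log. Hypothetical helper, capped at max.
func retryEnsureLease(ensureLease func() error, base, max time.Duration) {
	interval := base
	for {
		if err := ensureLease(); err == nil {
			return
		} else {
			fmt.Printf("Failed to ensure lease exists, will retry err=%q interval=%q\n", err, interval)
		}
		time.Sleep(interval)
		if interval *= 2; interval > max {
			interval = max
		}
	}
}

func main() {
	attempts := 0
	retryEnsureLease(func() error {
		if attempts++; attempts < 4 {
			// Same failure mode as the log: apiserver not yet accepting connections.
			return fmt.Errorf("dial tcp 10.0.0.92:6443: connect: connection refused")
		}
		return nil
	}, 200*time.Millisecond, 7*time.Second)
}
```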
Sep 12 00:17:21.336091 kubelet[2366]: I0912 00:17:21.336007 2366 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Sep 12 00:17:21.336417 kubelet[2366]: E0912 00:17:21.336378 2366 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.92:6443/api/v1/nodes\": dial tcp 10.0.0.92:6443: connect: connection refused" node="localhost" Sep 12 00:17:21.346356 containerd[1588]: time="2025-09-12T00:17:21.346265265Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:d428eac419aeb64fce6b97233b2283c3,Namespace:kube-system,Attempt:0,} returns sandbox id \"407cb8f52314572589728c159abcb86ca61c743136f1bf8270a69bc5a6780e4b\"" Sep 12 00:17:21.348214 kubelet[2366]: E0912 00:17:21.348173 2366 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 00:17:21.352943 containerd[1588]: time="2025-09-12T00:17:21.352902722Z" level=info msg="CreateContainer within sandbox \"407cb8f52314572589728c159abcb86ca61c743136f1bf8270a69bc5a6780e4b\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Sep 12 00:17:21.367568 containerd[1588]: time="2025-09-12T00:17:21.367495800Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:72a30db4fc25e4da65a3b99eba43be94,Namespace:kube-system,Attempt:0,} returns sandbox id \"f3cb680ec5e5ab74ea3c5117b1f8d98fcc7740fce12d9b133229918a4d375442\"" Sep 12 00:17:21.368294 kubelet[2366]: E0912 00:17:21.368266 2366 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 00:17:21.369406 containerd[1588]: time="2025-09-12T00:17:21.369382317Z" level=info msg="Container cb7a95a12112bc03ecaf7540693c7b03a0cdb77cd97acb8fc04618f687ff83c3: CDI devices from CRI Config.CDIDevices: []" Sep 12 00:17:21.370173 containerd[1588]: time="2025-09-12T00:17:21.370144437Z" level=info msg="CreateContainer within sandbox \"f3cb680ec5e5ab74ea3c5117b1f8d98fcc7740fce12d9b133229918a4d375442\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Sep 12 00:17:21.382876 containerd[1588]: time="2025-09-12T00:17:21.382708439Z" level=info msg="CreateContainer within sandbox \"407cb8f52314572589728c159abcb86ca61c743136f1bf8270a69bc5a6780e4b\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"cb7a95a12112bc03ecaf7540693c7b03a0cdb77cd97acb8fc04618f687ff83c3\"" Sep 12 00:17:21.383773 containerd[1588]: time="2025-09-12T00:17:21.383737879Z" level=info msg="StartContainer for \"cb7a95a12112bc03ecaf7540693c7b03a0cdb77cd97acb8fc04618f687ff83c3\"" Sep 12 00:17:21.385646 containerd[1588]: time="2025-09-12T00:17:21.385612906Z" level=info msg="connecting to shim cb7a95a12112bc03ecaf7540693c7b03a0cdb77cd97acb8fc04618f687ff83c3" address="unix:///run/containerd/s/b79410f3fbbd3193abbed4caae3626bfdfd39411bb9f201271c1bed2e9a11f75" protocol=ttrpc version=3 Sep 12 00:17:21.388001 containerd[1588]: time="2025-09-12T00:17:21.387967111Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:1403266a9792debaa127cd8df7a81c3c,Namespace:kube-system,Attempt:0,} returns sandbox id \"b23fd13bd1c67a85c40284b7441502ce3c338b45cbaac5deaf16967189e1af84\"" Sep 12 00:17:21.389102 kubelet[2366]: E0912 00:17:21.389028 2366 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were 
exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 00:17:21.391731 containerd[1588]: time="2025-09-12T00:17:21.391670987Z" level=info msg="CreateContainer within sandbox \"b23fd13bd1c67a85c40284b7441502ce3c338b45cbaac5deaf16967189e1af84\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Sep 12 00:17:21.392549 containerd[1588]: time="2025-09-12T00:17:21.392490143Z" level=info msg="Container 326865655858cb25236d6f28ea0c71e11200351ac21fb7cbee07dec89dadc982: CDI devices from CRI Config.CDIDevices: []" Sep 12 00:17:21.408909 containerd[1588]: time="2025-09-12T00:17:21.408828553Z" level=info msg="CreateContainer within sandbox \"f3cb680ec5e5ab74ea3c5117b1f8d98fcc7740fce12d9b133229918a4d375442\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"326865655858cb25236d6f28ea0c71e11200351ac21fb7cbee07dec89dadc982\"" Sep 12 00:17:21.409956 containerd[1588]: time="2025-09-12T00:17:21.409911835Z" level=info msg="StartContainer for \"326865655858cb25236d6f28ea0c71e11200351ac21fb7cbee07dec89dadc982\"" Sep 12 00:17:21.412131 containerd[1588]: time="2025-09-12T00:17:21.412100058Z" level=info msg="connecting to shim 326865655858cb25236d6f28ea0c71e11200351ac21fb7cbee07dec89dadc982" address="unix:///run/containerd/s/6267d1bcd7d569ddcaf869d85a7396ff48fa2f8e1d1c9aa5f3898d8124886096" protocol=ttrpc version=3 Sep 12 00:17:21.455289 containerd[1588]: time="2025-09-12T00:17:21.455086273Z" level=info msg="Container 941b75d9e54ac0a577d3736db340f81acf1fd771b4515164b7587d7ccaa5f437: CDI devices from CRI Config.CDIDevices: []" Sep 12 00:17:21.471922 systemd[1]: Started cri-containerd-cb7a95a12112bc03ecaf7540693c7b03a0cdb77cd97acb8fc04618f687ff83c3.scope - libcontainer container cb7a95a12112bc03ecaf7540693c7b03a0cdb77cd97acb8fc04618f687ff83c3. Sep 12 00:17:21.476368 systemd[1]: Started cri-containerd-326865655858cb25236d6f28ea0c71e11200351ac21fb7cbee07dec89dadc982.scope - libcontainer container 326865655858cb25236d6f28ea0c71e11200351ac21fb7cbee07dec89dadc982. 
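The "connecting to shim ... protocol=ttrpc version=3" entries above correspond to containerd dialing each shim's unix socket under /run/containerd/s/ and layering a ttrpc client over the connection. A hedged sketch of that dial, reusing a socket path from the log and assuming the github.com/containerd/ttrpc module:

```go
package main

import (
	"log"
	"net"

	"github.com/containerd/ttrpc"
)

func main() {
	// Socket path copied from the "connecting to shim" entry for the scheduler sandbox.
	const addr = "/run/containerd/s/6267d1bcd7d569ddcaf869d85a7396ff48fa2f8e1d1c9aa5f3898d8124886096"
	conn, err := net.Dial("unix", addr)
	if err != nil {
		log.Fatalf("dial shim: %v", err)
	}
	// The shim task API (version 3 here) is served as ttrpc RPCs over this connection.
	client := ttrpc.NewClient(conn)
	defer client.Close()
}
```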
Sep 12 00:17:21.554525 kubelet[2366]: W0912 00:17:21.554025 2366 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.92:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.92:6443: connect: connection refused Sep 12 00:17:21.554525 kubelet[2366]: E0912 00:17:21.554230 2366 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.92:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.92:6443: connect: connection refused" logger="UnhandledError" Sep 12 00:17:21.700564 kubelet[2366]: W0912 00:17:21.700422 2366 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.92:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.92:6443: connect: connection refused Sep 12 00:17:21.700766 kubelet[2366]: E0912 00:17:21.700579 2366 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.92:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.92:6443: connect: connection refused" logger="UnhandledError" Sep 12 00:17:21.778046 containerd[1588]: time="2025-09-12T00:17:21.777945780Z" level=info msg="StartContainer for \"326865655858cb25236d6f28ea0c71e11200351ac21fb7cbee07dec89dadc982\" returns successfully" Sep 12 00:17:21.780090 containerd[1588]: time="2025-09-12T00:17:21.780041550Z" level=info msg="StartContainer for \"cb7a95a12112bc03ecaf7540693c7b03a0cdb77cd97acb8fc04618f687ff83c3\" returns successfully" Sep 12 00:17:21.784887 containerd[1588]: time="2025-09-12T00:17:21.784841782Z" level=info msg="CreateContainer within sandbox \"b23fd13bd1c67a85c40284b7441502ce3c338b45cbaac5deaf16967189e1af84\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"941b75d9e54ac0a577d3736db340f81acf1fd771b4515164b7587d7ccaa5f437\"" Sep 12 00:17:21.785116 kubelet[2366]: E0912 00:17:21.785075 2366 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 12 00:17:21.785266 kubelet[2366]: E0912 00:17:21.785244 2366 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 00:17:21.785467 containerd[1588]: time="2025-09-12T00:17:21.785436638Z" level=info msg="StartContainer for \"941b75d9e54ac0a577d3736db340f81acf1fd771b4515164b7587d7ccaa5f437\"" Sep 12 00:17:21.787343 containerd[1588]: time="2025-09-12T00:17:21.786836483Z" level=info msg="connecting to shim 941b75d9e54ac0a577d3736db340f81acf1fd771b4515164b7587d7ccaa5f437" address="unix:///run/containerd/s/0f4b5551c414c28aaf798d0a4af9278282f1ed2b0480ccfae8f78f54f7f420a4" protocol=ttrpc version=3 Sep 12 00:17:21.812853 systemd[1]: Started cri-containerd-941b75d9e54ac0a577d3736db340f81acf1fd771b4515164b7587d7ccaa5f437.scope - libcontainer container 941b75d9e54ac0a577d3736db340f81acf1fd771b4515164b7587d7ccaa5f437. 
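The failing reflectors above are plain LIST calls with field selectors against an apiserver that is still coming up. Roughly the same request reconstructed with client-go; the kubeconfig path is an assumption for illustration:

```go
package main

import (
	"context"
	"log"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Hypothetical kubeconfig location; the kubelet uses its bootstrap-rotated credentials.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/etc/kubernetes/kubelet.conf")
	if err != nil {
		log.Fatal(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}
	// Mirrors GET /api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500
	nodes, err := cs.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{
		FieldSelector: "metadata.name=localhost",
		Limit:         500,
	})
	if err != nil {
		log.Fatal(err) // e.g. "connect: connection refused" while the apiserver starts
	}
	log.Printf("listed %d node(s)", len(nodes.Items))
}
```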
Sep 12 00:17:22.013487 containerd[1588]: time="2025-09-12T00:17:22.013416159Z" level=info msg="StartContainer for \"941b75d9e54ac0a577d3736db340f81acf1fd771b4515164b7587d7ccaa5f437\" returns successfully" Sep 12 00:17:22.138802 kubelet[2366]: I0912 00:17:22.138753 2366 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Sep 12 00:17:22.794912 kubelet[2366]: E0912 00:17:22.794859 2366 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 12 00:17:22.795097 kubelet[2366]: E0912 00:17:22.795049 2366 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 00:17:22.795097 kubelet[2366]: E0912 00:17:22.795057 2366 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 12 00:17:22.795276 kubelet[2366]: E0912 00:17:22.795243 2366 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 00:17:22.795593 kubelet[2366]: E0912 00:17:22.795574 2366 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 12 00:17:22.795680 kubelet[2366]: E0912 00:17:22.795664 2366 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 00:17:23.480377 kubelet[2366]: E0912 00:17:23.480319 2366 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Sep 12 00:17:23.549231 kubelet[2366]: I0912 00:17:23.549164 2366 apiserver.go:52] "Watching apiserver" Sep 12 00:17:23.557658 kubelet[2366]: I0912 00:17:23.557595 2366 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Sep 12 00:17:23.602310 kubelet[2366]: I0912 00:17:23.602258 2366 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Sep 12 00:17:23.658320 kubelet[2366]: I0912 00:17:23.658259 2366 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Sep 12 00:17:23.682862 kubelet[2366]: E0912 00:17:23.682785 2366 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost" Sep 12 00:17:23.682862 kubelet[2366]: I0912 00:17:23.682830 2366 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Sep 12 00:17:23.684378 kubelet[2366]: E0912 00:17:23.684351 2366 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-localhost" Sep 12 00:17:23.684378 kubelet[2366]: I0912 00:17:23.684373 2366 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Sep 12 00:17:23.685674 kubelet[2366]: E0912 00:17:23.685637 2366 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name 
system-node-critical was found" pod="kube-system/kube-scheduler-localhost" Sep 12 00:17:23.795264 kubelet[2366]: I0912 00:17:23.795134 2366 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Sep 12 00:17:23.795264 kubelet[2366]: I0912 00:17:23.795199 2366 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Sep 12 00:17:23.796843 kubelet[2366]: E0912 00:17:23.796803 2366 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-localhost" Sep 12 00:17:23.796843 kubelet[2366]: E0912 00:17:23.796817 2366 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost" Sep 12 00:17:23.797042 kubelet[2366]: E0912 00:17:23.796964 2366 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 00:17:23.797145 kubelet[2366]: E0912 00:17:23.797124 2366 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 00:17:26.485280 kubelet[2366]: I0912 00:17:26.485218 2366 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Sep 12 00:17:26.522082 kubelet[2366]: E0912 00:17:26.522042 2366 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 00:17:26.544053 kubelet[2366]: I0912 00:17:26.543996 2366 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Sep 12 00:17:26.651757 kubelet[2366]: E0912 00:17:26.651676 2366 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 00:17:26.803252 kubelet[2366]: E0912 00:17:26.802483 2366 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 00:17:26.803252 kubelet[2366]: E0912 00:17:26.802661 2366 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 00:17:27.657255 systemd[1]: Reload requested from client PID 2640 ('systemctl') (unit session-7.scope)... Sep 12 00:17:27.657271 systemd[1]: Reloading... Sep 12 00:17:27.749812 zram_generator::config[2683]: No configuration found. Sep 12 00:17:27.853325 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 12 00:17:27.993061 systemd[1]: Reloading finished in 335 ms. Sep 12 00:17:28.030119 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Sep 12 00:17:28.044413 systemd[1]: kubelet.service: Deactivated successfully. Sep 12 00:17:28.044879 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. 
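The mirror-pod rejections above clear once the built-in system-node-critical PriorityClass exists; the apiserver normally bootstraps that class itself shortly after it starts serving. For illustration only, a hedged client-go sketch of the equivalent object (admin kubeconfig path assumed):

```go
package main

import (
	"context"
	"log"

	schedulingv1 "k8s.io/api/scheduling/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Hypothetical kubeconfig location for illustration.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/etc/kubernetes/admin.conf")
	if err != nil {
		log.Fatal(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	pc := &schedulingv1.PriorityClass{
		ObjectMeta:  metav1.ObjectMeta{Name: "system-node-critical"},
		Value:       2000001000, // value of the built-in class the kubelet is waiting for
		Description: "Used for system critical pods that must not be moved from their current node.",
	}
	if _, err := cs.SchedulingV1().PriorityClasses().Create(context.TODO(), pc, metav1.CreateOptions{}); err != nil {
		log.Fatal(err)
	}
}
```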
Sep 12 00:17:28.044976 systemd[1]: kubelet.service: Consumed 887ms CPU time, 132.5M memory peak. Sep 12 00:17:28.047737 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 12 00:17:28.279774 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 12 00:17:28.289150 (kubelet)[2728]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Sep 12 00:17:28.354608 kubelet[2728]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 12 00:17:28.354608 kubelet[2728]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Sep 12 00:17:28.354608 kubelet[2728]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 12 00:17:28.355067 kubelet[2728]: I0912 00:17:28.354667 2728 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Sep 12 00:17:28.361704 kubelet[2728]: I0912 00:17:28.361653 2728 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Sep 12 00:17:28.361704 kubelet[2728]: I0912 00:17:28.361685 2728 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Sep 12 00:17:28.362094 kubelet[2728]: I0912 00:17:28.362067 2728 server.go:954] "Client rotation is on, will bootstrap in background" Sep 12 00:17:28.363634 kubelet[2728]: I0912 00:17:28.363606 2728 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Sep 12 00:17:28.366042 kubelet[2728]: I0912 00:17:28.366020 2728 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Sep 12 00:17:28.370671 kubelet[2728]: I0912 00:17:28.370621 2728 server.go:1444] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Sep 12 00:17:28.375659 kubelet[2728]: I0912 00:17:28.375604 2728 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Sep 12 00:17:28.375936 kubelet[2728]: I0912 00:17:28.375904 2728 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Sep 12 00:17:28.376098 kubelet[2728]: I0912 00:17:28.375934 2728 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Sep 12 00:17:28.376177 kubelet[2728]: I0912 00:17:28.376109 2728 topology_manager.go:138] "Creating topology manager with none policy" Sep 12 00:17:28.376177 kubelet[2728]: I0912 00:17:28.376119 2728 container_manager_linux.go:304] "Creating device plugin manager" Sep 12 00:17:28.376177 kubelet[2728]: I0912 00:17:28.376168 2728 state_mem.go:36] "Initialized new in-memory state store" Sep 12 00:17:28.376383 kubelet[2728]: I0912 00:17:28.376344 2728 kubelet.go:446] "Attempting to sync node with API server" Sep 12 00:17:28.376383 kubelet[2728]: I0912 00:17:28.376383 2728 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Sep 12 00:17:28.376455 kubelet[2728]: I0912 00:17:28.376421 2728 kubelet.go:352] "Adding apiserver pod source" Sep 12 00:17:28.376455 kubelet[2728]: I0912 00:17:28.376444 2728 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Sep 12 00:17:28.377294 kubelet[2728]: I0912 00:17:28.377266 2728 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v2.0.4" apiVersion="v1" Sep 12 00:17:28.377688 kubelet[2728]: I0912 00:17:28.377658 2728 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Sep 12 00:17:28.378731 kubelet[2728]: I0912 00:17:28.378129 2728 watchdog_linux.go:99] "Systemd watchdog is not enabled" Sep 12 00:17:28.378731 kubelet[2728]: I0912 00:17:28.378158 2728 server.go:1287] "Started kubelet" Sep 12 00:17:28.378926 kubelet[2728]: I0912 00:17:28.378886 2728 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Sep 12 00:17:28.379503 kubelet[2728]: I0912 
00:17:28.379486 2728 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Sep 12 00:17:28.381780 kubelet[2728]: I0912 00:17:28.380647 2728 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Sep 12 00:17:28.381961 kubelet[2728]: I0912 00:17:28.381937 2728 volume_manager.go:297] "Starting Kubelet Volume Manager" Sep 12 00:17:28.382044 kubelet[2728]: I0912 00:17:28.382018 2728 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Sep 12 00:17:28.383432 kubelet[2728]: I0912 00:17:28.382893 2728 server.go:479] "Adding debug handlers to kubelet server" Sep 12 00:17:28.383592 kubelet[2728]: E0912 00:17:28.383561 2728 kubelet.go:1555] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Sep 12 00:17:28.383631 kubelet[2728]: I0912 00:17:28.380794 2728 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Sep 12 00:17:28.384964 kubelet[2728]: I0912 00:17:28.383785 2728 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Sep 12 00:17:28.384964 kubelet[2728]: I0912 00:17:28.384032 2728 reconciler.go:26] "Reconciler: start to sync state" Sep 12 00:17:28.384964 kubelet[2728]: E0912 00:17:28.384086 2728 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 12 00:17:28.384964 kubelet[2728]: I0912 00:17:28.384416 2728 factory.go:221] Registration of the systemd container factory successfully Sep 12 00:17:28.384964 kubelet[2728]: I0912 00:17:28.384552 2728 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Sep 12 00:17:28.386044 kubelet[2728]: I0912 00:17:28.386014 2728 factory.go:221] Registration of the containerd container factory successfully Sep 12 00:17:28.404839 kubelet[2728]: I0912 00:17:28.404791 2728 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Sep 12 00:17:28.407384 kubelet[2728]: I0912 00:17:28.407256 2728 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Sep 12 00:17:28.407384 kubelet[2728]: I0912 00:17:28.407292 2728 status_manager.go:227] "Starting to sync pod status with apiserver" Sep 12 00:17:28.407384 kubelet[2728]: I0912 00:17:28.407318 2728 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
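The restarted kubelet again serves the podresources API on unix:/var/lib/kubelet/pod-resources/kubelet.sock, as logged above. A hedged gRPC client sketch against that socket using the published k8s.io/kubelet API types:

```go
package main

import (
	"context"
	"log"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	podresourcesv1 "k8s.io/kubelet/pkg/apis/podresources/v1"
)

func main() {
	// The socket is local and unauthenticated, hence insecure transport credentials.
	conn, err := grpc.Dial("unix:///var/lib/kubelet/pod-resources/kubelet.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()

	client := podresourcesv1.NewPodResourcesListerClient(conn)
	resp, err := client.List(context.TODO(), &podresourcesv1.ListPodResourcesRequest{})
	if err != nil {
		log.Fatal(err)
	}
	log.Printf("kubelet reports %d pod(s)", len(resp.GetPodResources()))
}
```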
Sep 12 00:17:28.407384 kubelet[2728]: I0912 00:17:28.407328 2728 kubelet.go:2382] "Starting kubelet main sync loop" Sep 12 00:17:28.408025 kubelet[2728]: E0912 00:17:28.407396 2728 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Sep 12 00:17:28.424141 kubelet[2728]: I0912 00:17:28.424118 2728 cpu_manager.go:221] "Starting CPU manager" policy="none" Sep 12 00:17:28.424776 kubelet[2728]: I0912 00:17:28.424250 2728 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Sep 12 00:17:28.424776 kubelet[2728]: I0912 00:17:28.424272 2728 state_mem.go:36] "Initialized new in-memory state store" Sep 12 00:17:28.424776 kubelet[2728]: I0912 00:17:28.424422 2728 state_mem.go:88] "Updated default CPUSet" cpuSet="" Sep 12 00:17:28.424776 kubelet[2728]: I0912 00:17:28.424433 2728 state_mem.go:96] "Updated CPUSet assignments" assignments={} Sep 12 00:17:28.424776 kubelet[2728]: I0912 00:17:28.424451 2728 policy_none.go:49] "None policy: Start" Sep 12 00:17:28.424776 kubelet[2728]: I0912 00:17:28.424460 2728 memory_manager.go:186] "Starting memorymanager" policy="None" Sep 12 00:17:28.424776 kubelet[2728]: I0912 00:17:28.424470 2728 state_mem.go:35] "Initializing new in-memory state store" Sep 12 00:17:28.424776 kubelet[2728]: I0912 00:17:28.424565 2728 state_mem.go:75] "Updated machine memory state" Sep 12 00:17:28.429467 kubelet[2728]: I0912 00:17:28.429432 2728 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Sep 12 00:17:28.429670 kubelet[2728]: I0912 00:17:28.429642 2728 eviction_manager.go:189] "Eviction manager: starting control loop" Sep 12 00:17:28.429705 kubelet[2728]: I0912 00:17:28.429664 2728 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Sep 12 00:17:28.430327 kubelet[2728]: I0912 00:17:28.430178 2728 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Sep 12 00:17:28.431633 kubelet[2728]: E0912 00:17:28.431591 2728 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Sep 12 00:17:28.508474 kubelet[2728]: I0912 00:17:28.508422 2728 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Sep 12 00:17:28.508617 kubelet[2728]: I0912 00:17:28.508536 2728 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Sep 12 00:17:28.508702 kubelet[2728]: I0912 00:17:28.508668 2728 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Sep 12 00:17:28.535874 kubelet[2728]: I0912 00:17:28.535748 2728 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Sep 12 00:17:28.585452 kubelet[2728]: I0912 00:17:28.585387 2728 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/72a30db4fc25e4da65a3b99eba43be94-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"72a30db4fc25e4da65a3b99eba43be94\") " pod="kube-system/kube-scheduler-localhost" Sep 12 00:17:28.585452 kubelet[2728]: I0912 00:17:28.585435 2728 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/d428eac419aeb64fce6b97233b2283c3-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"d428eac419aeb64fce6b97233b2283c3\") " pod="kube-system/kube-apiserver-localhost" Sep 12 00:17:28.585452 kubelet[2728]: I0912 00:17:28.585460 2728 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/1403266a9792debaa127cd8df7a81c3c-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"1403266a9792debaa127cd8df7a81c3c\") " pod="kube-system/kube-controller-manager-localhost" Sep 12 00:17:28.585683 kubelet[2728]: I0912 00:17:28.585481 2728 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/1403266a9792debaa127cd8df7a81c3c-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"1403266a9792debaa127cd8df7a81c3c\") " pod="kube-system/kube-controller-manager-localhost" Sep 12 00:17:28.585683 kubelet[2728]: I0912 00:17:28.585496 2728 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/d428eac419aeb64fce6b97233b2283c3-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"d428eac419aeb64fce6b97233b2283c3\") " pod="kube-system/kube-apiserver-localhost" Sep 12 00:17:28.585683 kubelet[2728]: I0912 00:17:28.585517 2728 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/d428eac419aeb64fce6b97233b2283c3-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"d428eac419aeb64fce6b97233b2283c3\") " pod="kube-system/kube-apiserver-localhost" Sep 12 00:17:28.585683 kubelet[2728]: I0912 00:17:28.585541 2728 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/1403266a9792debaa127cd8df7a81c3c-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"1403266a9792debaa127cd8df7a81c3c\") " pod="kube-system/kube-controller-manager-localhost" Sep 12 00:17:28.585683 kubelet[2728]: I0912 00:17:28.585556 2728 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/1403266a9792debaa127cd8df7a81c3c-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"1403266a9792debaa127cd8df7a81c3c\") " pod="kube-system/kube-controller-manager-localhost" Sep 12 00:17:28.585824 kubelet[2728]: I0912 00:17:28.585591 2728 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/1403266a9792debaa127cd8df7a81c3c-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"1403266a9792debaa127cd8df7a81c3c\") " pod="kube-system/kube-controller-manager-localhost" Sep 12 00:17:28.748990 kubelet[2728]: E0912 00:17:28.748949 2728 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 00:17:28.982757 kubelet[2728]: E0912 00:17:28.982619 2728 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Sep 12 00:17:28.982757 kubelet[2728]: E0912 00:17:28.982671 2728 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost" Sep 12 00:17:28.982972 kubelet[2728]: E0912 00:17:28.982884 2728 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 00:17:28.983223 kubelet[2728]: E0912 00:17:28.983202 2728 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 00:17:29.016472 kubelet[2728]: I0912 00:17:29.016417 2728 kubelet_node_status.go:124] "Node was previously registered" node="localhost" Sep 12 00:17:29.016796 kubelet[2728]: I0912 00:17:29.016537 2728 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Sep 12 00:17:29.376974 kubelet[2728]: I0912 00:17:29.376941 2728 apiserver.go:52] "Watching apiserver" Sep 12 00:17:29.384896 kubelet[2728]: I0912 00:17:29.384866 2728 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Sep 12 00:17:29.419742 kubelet[2728]: I0912 00:17:29.419392 2728 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Sep 12 00:17:29.419742 kubelet[2728]: I0912 00:17:29.419427 2728 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Sep 12 00:17:29.420342 kubelet[2728]: E0912 00:17:29.420262 2728 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 00:17:29.432644 kubelet[2728]: E0912 00:17:29.432237 2728 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" Sep 12 00:17:29.432644 kubelet[2728]: E0912 00:17:29.432549 2728 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 00:17:29.432879 kubelet[2728]: E0912 00:17:29.432247 2728 kubelet.go:3196] "Failed creating a 
mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Sep 12 00:17:29.433641 kubelet[2728]: E0912 00:17:29.433334 2728 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 00:17:29.448509 kubelet[2728]: I0912 00:17:29.448435 2728 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.4484115 podStartE2EDuration="1.4484115s" podCreationTimestamp="2025-09-12 00:17:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-12 00:17:29.446330699 +0000 UTC m=+1.150064668" watchObservedRunningTime="2025-09-12 00:17:29.4484115 +0000 UTC m=+1.152145479" Sep 12 00:17:29.463272 kubelet[2728]: I0912 00:17:29.463200 2728 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=3.463180603 podStartE2EDuration="3.463180603s" podCreationTimestamp="2025-09-12 00:17:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-12 00:17:29.455023059 +0000 UTC m=+1.158757028" watchObservedRunningTime="2025-09-12 00:17:29.463180603 +0000 UTC m=+1.166914562" Sep 12 00:17:29.471830 kubelet[2728]: I0912 00:17:29.471764 2728 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=3.4717403239999998 podStartE2EDuration="3.471740324s" podCreationTimestamp="2025-09-12 00:17:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-12 00:17:29.463329163 +0000 UTC m=+1.167063132" watchObservedRunningTime="2025-09-12 00:17:29.471740324 +0000 UTC m=+1.175474303" Sep 12 00:17:30.420261 kubelet[2728]: E0912 00:17:30.420219 2728 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 00:17:30.420735 kubelet[2728]: E0912 00:17:30.420702 2728 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 00:17:31.561327 kubelet[2728]: I0912 00:17:31.561240 2728 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Sep 12 00:17:31.562878 containerd[1588]: time="2025-09-12T00:17:31.561706655Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Sep 12 00:17:31.563228 kubelet[2728]: I0912 00:17:31.563199 2728 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Sep 12 00:17:31.879512 systemd[1]: Created slice kubepods-besteffort-pod43ee2bf8_9bb3_42e0_bbcc_73ca7bb65976.slice - libcontainer container kubepods-besteffort-pod43ee2bf8_9bb3_42e0_bbcc_73ca7bb65976.slice. 
Sep 12 00:17:31.905167 kubelet[2728]: I0912 00:17:31.905119 2728 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/43ee2bf8-9bb3-42e0-bbcc-73ca7bb65976-lib-modules\") pod \"kube-proxy-9ssqs\" (UID: \"43ee2bf8-9bb3-42e0-bbcc-73ca7bb65976\") " pod="kube-system/kube-proxy-9ssqs"
Sep 12 00:17:31.905167 kubelet[2728]: I0912 00:17:31.905153 2728 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/43ee2bf8-9bb3-42e0-bbcc-73ca7bb65976-kube-proxy\") pod \"kube-proxy-9ssqs\" (UID: \"43ee2bf8-9bb3-42e0-bbcc-73ca7bb65976\") " pod="kube-system/kube-proxy-9ssqs"
Sep 12 00:17:31.905167 kubelet[2728]: I0912 00:17:31.905171 2728 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/43ee2bf8-9bb3-42e0-bbcc-73ca7bb65976-xtables-lock\") pod \"kube-proxy-9ssqs\" (UID: \"43ee2bf8-9bb3-42e0-bbcc-73ca7bb65976\") " pod="kube-system/kube-proxy-9ssqs"
Sep 12 00:17:31.905502 kubelet[2728]: I0912 00:17:31.905195 2728 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nccx5\" (UniqueName: \"kubernetes.io/projected/43ee2bf8-9bb3-42e0-bbcc-73ca7bb65976-kube-api-access-nccx5\") pod \"kube-proxy-9ssqs\" (UID: \"43ee2bf8-9bb3-42e0-bbcc-73ca7bb65976\") " pod="kube-system/kube-proxy-9ssqs"
Sep 12 00:17:31.939737 systemd[1]: Created slice kubepods-besteffort-pod2ad00d8f_8243_424b_97af_07c0b4ed2c1f.slice - libcontainer container kubepods-besteffort-pod2ad00d8f_8243_424b_97af_07c0b4ed2c1f.slice.
Sep 12 00:17:32.006021 kubelet[2728]: I0912 00:17:32.005953 2728 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/2ad00d8f-8243-424b-97af-07c0b4ed2c1f-var-lib-calico\") pod \"tigera-operator-755d956888-4dwwr\" (UID: \"2ad00d8f-8243-424b-97af-07c0b4ed2c1f\") " pod="tigera-operator/tigera-operator-755d956888-4dwwr"
Sep 12 00:17:32.006021 kubelet[2728]: I0912 00:17:32.006019 2728 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k9h49\" (UniqueName: \"kubernetes.io/projected/2ad00d8f-8243-424b-97af-07c0b4ed2c1f-kube-api-access-k9h49\") pod \"tigera-operator-755d956888-4dwwr\" (UID: \"2ad00d8f-8243-424b-97af-07c0b4ed2c1f\") " pod="tigera-operator/tigera-operator-755d956888-4dwwr"
Sep 12 00:17:32.194625 kubelet[2728]: E0912 00:17:32.194461 2728 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 12 00:17:32.196556 containerd[1588]: time="2025-09-12T00:17:32.196502199Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-9ssqs,Uid:43ee2bf8-9bb3-42e0-bbcc-73ca7bb65976,Namespace:kube-system,Attempt:0,}"
Sep 12 00:17:32.248131 containerd[1588]: time="2025-09-12T00:17:32.248037025Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-755d956888-4dwwr,Uid:2ad00d8f-8243-424b-97af-07c0b4ed2c1f,Namespace:tigera-operator,Attempt:0,}"
Sep 12 00:17:33.166847 containerd[1588]: time="2025-09-12T00:17:33.166794393Z" level=info msg="connecting to shim 4bb7a97623247b932491519c7bb143bf003f6afafc4ac7a52fe343cde11127e6" address="unix:///run/containerd/s/9ca5ecd76dd98198852110e763ecd29d9373acde9eb3c859b3d3d45b591c335a" namespace=k8s.io protocol=ttrpc version=3
Sep 12 00:17:33.167260 containerd[1588]: time="2025-09-12T00:17:33.166933634Z" level=info msg="connecting to shim ef4d63dc6f40a3747ba06e8bccc13b21a3ca7f05091bb271cf46e72d484f0ae3" address="unix:///run/containerd/s/075fdbc2eea77048a41bed787c83b3e4ce04b2d83c840862ae07deeff2cf8b3e" namespace=k8s.io protocol=ttrpc version=3
Sep 12 00:17:33.193893 systemd[1]: Started cri-containerd-ef4d63dc6f40a3747ba06e8bccc13b21a3ca7f05091bb271cf46e72d484f0ae3.scope - libcontainer container ef4d63dc6f40a3747ba06e8bccc13b21a3ca7f05091bb271cf46e72d484f0ae3.
Sep 12 00:17:33.197409 systemd[1]: Started cri-containerd-4bb7a97623247b932491519c7bb143bf003f6afafc4ac7a52fe343cde11127e6.scope - libcontainer container 4bb7a97623247b932491519c7bb143bf003f6afafc4ac7a52fe343cde11127e6.
Sep 12 00:17:33.227802 containerd[1588]: time="2025-09-12T00:17:33.227678893Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-9ssqs,Uid:43ee2bf8-9bb3-42e0-bbcc-73ca7bb65976,Namespace:kube-system,Attempt:0,} returns sandbox id \"4bb7a97623247b932491519c7bb143bf003f6afafc4ac7a52fe343cde11127e6\""
Sep 12 00:17:33.228578 kubelet[2728]: E0912 00:17:33.228543 2728 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 12 00:17:33.231904 containerd[1588]: time="2025-09-12T00:17:33.231374576Z" level=info msg="CreateContainer within sandbox \"4bb7a97623247b932491519c7bb143bf003f6afafc4ac7a52fe343cde11127e6\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
Sep 12 00:17:33.250417 containerd[1588]: time="2025-09-12T00:17:33.250335580Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-755d956888-4dwwr,Uid:2ad00d8f-8243-424b-97af-07c0b4ed2c1f,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"ef4d63dc6f40a3747ba06e8bccc13b21a3ca7f05091bb271cf46e72d484f0ae3\""
Sep 12 00:17:33.250577 containerd[1588]: time="2025-09-12T00:17:33.250513764Z" level=info msg="Container 8ff9bfbf3a8e57b7dc315210764459b4dfef9f7951f1351b484da9855bf71280: CDI devices from CRI Config.CDIDevices: []"
Sep 12 00:17:33.254084 containerd[1588]: time="2025-09-12T00:17:33.254050580Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.6\""
Sep 12 00:17:33.262501 containerd[1588]: time="2025-09-12T00:17:33.262381532Z" level=info msg="CreateContainer within sandbox \"4bb7a97623247b932491519c7bb143bf003f6afafc4ac7a52fe343cde11127e6\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"8ff9bfbf3a8e57b7dc315210764459b4dfef9f7951f1351b484da9855bf71280\""
Sep 12 00:17:33.263308 containerd[1588]: time="2025-09-12T00:17:33.263275812Z" level=info msg="StartContainer for \"8ff9bfbf3a8e57b7dc315210764459b4dfef9f7951f1351b484da9855bf71280\""
Sep 12 00:17:33.266161 containerd[1588]: time="2025-09-12T00:17:33.265850189Z" level=info msg="connecting to shim 8ff9bfbf3a8e57b7dc315210764459b4dfef9f7951f1351b484da9855bf71280" address="unix:///run/containerd/s/9ca5ecd76dd98198852110e763ecd29d9373acde9eb3c859b3d3d45b591c335a" protocol=ttrpc version=3
Sep 12 00:17:33.289874 systemd[1]: Started cri-containerd-8ff9bfbf3a8e57b7dc315210764459b4dfef9f7951f1351b484da9855bf71280.scope - libcontainer container 8ff9bfbf3a8e57b7dc315210764459b4dfef9f7951f1351b484da9855bf71280.
Sep 12 00:17:33.335265 containerd[1588]: time="2025-09-12T00:17:33.335202267Z" level=info msg="StartContainer for \"8ff9bfbf3a8e57b7dc315210764459b4dfef9f7951f1351b484da9855bf71280\" returns successfully"
Sep 12 00:17:33.428332 kubelet[2728]: E0912 00:17:33.428198 2728 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 12 00:17:33.439628 kubelet[2728]: I0912 00:17:33.439547 2728 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-9ssqs" podStartSLOduration=2.439526871 podStartE2EDuration="2.439526871s" podCreationTimestamp="2025-09-12 00:17:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-12 00:17:33.439514198 +0000 UTC m=+5.143248167" watchObservedRunningTime="2025-09-12 00:17:33.439526871 +0000 UTC m=+5.143260840"
Sep 12 00:17:33.791454 kubelet[2728]: E0912 00:17:33.791322 2728 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 12 00:17:34.157115 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4104878993.mount: Deactivated successfully.
Sep 12 00:17:34.431655 kubelet[2728]: E0912 00:17:34.431537 2728 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 12 00:17:34.602050 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2885575669.mount: Deactivated successfully.
Sep 12 00:17:34.941394 containerd[1588]: time="2025-09-12T00:17:34.941330589Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 00:17:34.942280 containerd[1588]: time="2025-09-12T00:17:34.942232704Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.6: active requests=0, bytes read=25062609"
Sep 12 00:17:34.943333 containerd[1588]: time="2025-09-12T00:17:34.943285322Z" level=info msg="ImageCreate event name:\"sha256:1911afdd8478c6ca3036ff85614050d5d19acc0f0c3f6a5a7b3e34b38dd309c9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 00:17:34.945471 containerd[1588]: time="2025-09-12T00:17:34.945438507Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:00a7a9b62f9b9a4e0856128b078539783b8352b07f707bff595cb604cc580f6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 00:17:34.946000 containerd[1588]: time="2025-09-12T00:17:34.945969574Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.6\" with image id \"sha256:1911afdd8478c6ca3036ff85614050d5d19acc0f0c3f6a5a7b3e34b38dd309c9\", repo tag \"quay.io/tigera/operator:v1.38.6\", repo digest \"quay.io/tigera/operator@sha256:00a7a9b62f9b9a4e0856128b078539783b8352b07f707bff595cb604cc580f6e\", size \"25058604\" in 1.69188513s"
Sep 12 00:17:34.946035 containerd[1588]: time="2025-09-12T00:17:34.946000102Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.6\" returns image reference \"sha256:1911afdd8478c6ca3036ff85614050d5d19acc0f0c3f6a5a7b3e34b38dd309c9\""
Sep 12 00:17:34.948734 containerd[1588]: time="2025-09-12T00:17:34.948025406Z" level=info msg="CreateContainer within sandbox \"ef4d63dc6f40a3747ba06e8bccc13b21a3ca7f05091bb271cf46e72d484f0ae3\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}"
Sep 12 00:17:34.957176 containerd[1588]: time="2025-09-12T00:17:34.957138576Z" level=info msg="Container 52ebbb23c81f95166230cd00624f00846eae37368fcbe1a4f657e8a6e6c0995c: CDI devices from CRI Config.CDIDevices: []"
Sep 12 00:17:34.963826 containerd[1588]: time="2025-09-12T00:17:34.963778830Z" level=info msg="CreateContainer within sandbox \"ef4d63dc6f40a3747ba06e8bccc13b21a3ca7f05091bb271cf46e72d484f0ae3\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"52ebbb23c81f95166230cd00624f00846eae37368fcbe1a4f657e8a6e6c0995c\""
Sep 12 00:17:34.964360 containerd[1588]: time="2025-09-12T00:17:34.964299439Z" level=info msg="StartContainer for \"52ebbb23c81f95166230cd00624f00846eae37368fcbe1a4f657e8a6e6c0995c\""
Sep 12 00:17:34.965177 containerd[1588]: time="2025-09-12T00:17:34.965134867Z" level=info msg="connecting to shim 52ebbb23c81f95166230cd00624f00846eae37368fcbe1a4f657e8a6e6c0995c" address="unix:///run/containerd/s/075fdbc2eea77048a41bed787c83b3e4ce04b2d83c840862ae07deeff2cf8b3e" protocol=ttrpc version=3
Sep 12 00:17:35.015865 systemd[1]: Started cri-containerd-52ebbb23c81f95166230cd00624f00846eae37368fcbe1a4f657e8a6e6c0995c.scope - libcontainer container 52ebbb23c81f95166230cd00624f00846eae37368fcbe1a4f657e8a6e6c0995c.
Sep 12 00:17:35.049810 containerd[1588]: time="2025-09-12T00:17:35.049756076Z" level=info msg="StartContainer for \"52ebbb23c81f95166230cd00624f00846eae37368fcbe1a4f657e8a6e6c0995c\" returns successfully"
Sep 12 00:17:35.447421 kubelet[2728]: I0912 00:17:35.447351 2728 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-755d956888-4dwwr" podStartSLOduration=2.753883175 podStartE2EDuration="4.447332234s" podCreationTimestamp="2025-09-12 00:17:31 +0000 UTC" firstStartedPulling="2025-09-12 00:17:33.253400638 +0000 UTC m=+4.957134607" lastFinishedPulling="2025-09-12 00:17:34.946849697 +0000 UTC m=+6.650583666" observedRunningTime="2025-09-12 00:17:35.447152796 +0000 UTC m=+7.150886765" watchObservedRunningTime="2025-09-12 00:17:35.447332234 +0000 UTC m=+7.151066193"
Sep 12 00:17:35.865676 update_engine[1579]: I20250912 00:17:35.865597 1579 update_attempter.cc:509] Updating boot flags...
Sep 12 00:17:37.490666 kubelet[2728]: E0912 00:17:37.490618 2728 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 12 00:17:38.449744 kubelet[2728]: E0912 00:17:38.447635 2728 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 12 00:17:39.225358 kubelet[2728]: E0912 00:17:39.225314 2728 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 12 00:17:40.516659 sudo[1798]: pam_unix(sudo:session): session closed for user root
Sep 12 00:17:40.518245 sshd[1797]: Connection closed by 10.0.0.1 port 60936
Sep 12 00:17:40.519289 sshd-session[1795]: pam_unix(sshd:session): session closed for user core
Sep 12 00:17:40.525355 systemd[1]: sshd@6-10.0.0.92:22-10.0.0.1:60936.service: Deactivated successfully.
Sep 12 00:17:40.534221 systemd[1]: session-7.scope: Deactivated successfully.
Sep 12 00:17:40.534488 systemd[1]: session-7.scope: Consumed 4.786s CPU time, 226.7M memory peak.
Sep 12 00:17:40.535819 systemd-logind[1570]: Session 7 logged out. Waiting for processes to exit.
Sep 12 00:17:40.541080 systemd-logind[1570]: Removed session 7.
Sep 12 00:17:42.920183 systemd[1]: Created slice kubepods-besteffort-pode95d682b_1582_4e2c_8930_2481eb91a5f3.slice - libcontainer container kubepods-besteffort-pode95d682b_1582_4e2c_8930_2481eb91a5f3.slice.
Sep 12 00:17:42.970930 kubelet[2728]: I0912 00:17:42.970861 2728 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/e95d682b-1582-4e2c-8930-2481eb91a5f3-typha-certs\") pod \"calico-typha-6f87dc89ff-dwsvg\" (UID: \"e95d682b-1582-4e2c-8930-2481eb91a5f3\") " pod="calico-system/calico-typha-6f87dc89ff-dwsvg"
Sep 12 00:17:42.970930 kubelet[2728]: I0912 00:17:42.970935 2728 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e95d682b-1582-4e2c-8930-2481eb91a5f3-tigera-ca-bundle\") pod \"calico-typha-6f87dc89ff-dwsvg\" (UID: \"e95d682b-1582-4e2c-8930-2481eb91a5f3\") " pod="calico-system/calico-typha-6f87dc89ff-dwsvg"
Sep 12 00:17:42.971565 kubelet[2728]: I0912 00:17:42.970960 2728 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w52mn\" (UniqueName: \"kubernetes.io/projected/e95d682b-1582-4e2c-8930-2481eb91a5f3-kube-api-access-w52mn\") pod \"calico-typha-6f87dc89ff-dwsvg\" (UID: \"e95d682b-1582-4e2c-8930-2481eb91a5f3\") " pod="calico-system/calico-typha-6f87dc89ff-dwsvg"
Sep 12 00:17:43.226120 kubelet[2728]: E0912 00:17:43.225944 2728 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 12 00:17:43.226522 containerd[1588]: time="2025-09-12T00:17:43.226479363Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-6f87dc89ff-dwsvg,Uid:e95d682b-1582-4e2c-8930-2481eb91a5f3,Namespace:calico-system,Attempt:0,}"
Sep 12 00:17:43.374092 kubelet[2728]: I0912 00:17:43.373995 2728 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/e9c1ab5e-67c4-4f09-a2a4-f2df6c9579f9-flexvol-driver-host\") pod \"calico-node-z4lt6\" (UID: \"e9c1ab5e-67c4-4f09-a2a4-f2df6c9579f9\") " pod="calico-system/calico-node-z4lt6"
Sep 12 00:17:43.374092 kubelet[2728]: I0912 00:17:43.374076 2728 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/e9c1ab5e-67c4-4f09-a2a4-f2df6c9579f9-var-lib-calico\") pod \"calico-node-z4lt6\" (UID: \"e9c1ab5e-67c4-4f09-a2a4-f2df6c9579f9\") " pod="calico-system/calico-node-z4lt6"
Sep 12 00:17:43.374092 kubelet[2728]: I0912 00:17:43.374097 2728 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r6px7\" (UniqueName: \"kubernetes.io/projected/e9c1ab5e-67c4-4f09-a2a4-f2df6c9579f9-kube-api-access-r6px7\") pod \"calico-node-z4lt6\" (UID: \"e9c1ab5e-67c4-4f09-a2a4-f2df6c9579f9\") " pod="calico-system/calico-node-z4lt6"
Sep 12 00:17:43.374480 kubelet[2728]: I0912 00:17:43.374118 2728 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/e9c1ab5e-67c4-4f09-a2a4-f2df6c9579f9-policysync\") pod \"calico-node-z4lt6\" (UID: \"e9c1ab5e-67c4-4f09-a2a4-f2df6c9579f9\") " pod="calico-system/calico-node-z4lt6"
Sep 12 00:17:43.374480 kubelet[2728]: I0912 00:17:43.374138 2728 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/e9c1ab5e-67c4-4f09-a2a4-f2df6c9579f9-node-certs\") pod \"calico-node-z4lt6\" (UID: \"e9c1ab5e-67c4-4f09-a2a4-f2df6c9579f9\") " pod="calico-system/calico-node-z4lt6"
Sep 12 00:17:43.374480 kubelet[2728]: I0912 00:17:43.374155 2728 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/e9c1ab5e-67c4-4f09-a2a4-f2df6c9579f9-lib-modules\") pod \"calico-node-z4lt6\" (UID: \"e9c1ab5e-67c4-4f09-a2a4-f2df6c9579f9\") " pod="calico-system/calico-node-z4lt6"
Sep 12 00:17:43.374480 kubelet[2728]: I0912 00:17:43.374198 2728 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/e9c1ab5e-67c4-4f09-a2a4-f2df6c9579f9-cni-bin-dir\") pod \"calico-node-z4lt6\" (UID: \"e9c1ab5e-67c4-4f09-a2a4-f2df6c9579f9\") " pod="calico-system/calico-node-z4lt6"
Sep 12 00:17:43.374480 kubelet[2728]: I0912 00:17:43.374236 2728 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/e9c1ab5e-67c4-4f09-a2a4-f2df6c9579f9-var-run-calico\") pod \"calico-node-z4lt6\" (UID: \"e9c1ab5e-67c4-4f09-a2a4-f2df6c9579f9\") " pod="calico-system/calico-node-z4lt6"
Sep 12 00:17:43.374639 kubelet[2728]: I0912 00:17:43.374255 2728 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/e9c1ab5e-67c4-4f09-a2a4-f2df6c9579f9-xtables-lock\") pod \"calico-node-z4lt6\" (UID: \"e9c1ab5e-67c4-4f09-a2a4-f2df6c9579f9\") " pod="calico-system/calico-node-z4lt6"
Sep 12 00:17:43.374639 kubelet[2728]: I0912 00:17:43.374282 2728 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e9c1ab5e-67c4-4f09-a2a4-f2df6c9579f9-tigera-ca-bundle\") pod \"calico-node-z4lt6\" (UID: \"e9c1ab5e-67c4-4f09-a2a4-f2df6c9579f9\") " pod="calico-system/calico-node-z4lt6"
Sep 12 00:17:43.374639 kubelet[2728]: I0912 00:17:43.374313 2728 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/e9c1ab5e-67c4-4f09-a2a4-f2df6c9579f9-cni-log-dir\") pod \"calico-node-z4lt6\" (UID: \"e9c1ab5e-67c4-4f09-a2a4-f2df6c9579f9\") " pod="calico-system/calico-node-z4lt6"
Sep 12 00:17:43.374639 kubelet[2728]: I0912 00:17:43.374331 2728 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/e9c1ab5e-67c4-4f09-a2a4-f2df6c9579f9-cni-net-dir\") pod \"calico-node-z4lt6\" (UID: \"e9c1ab5e-67c4-4f09-a2a4-f2df6c9579f9\") " pod="calico-system/calico-node-z4lt6"
Sep 12 00:17:43.383239 systemd[1]: Created slice kubepods-besteffort-pode9c1ab5e_67c4_4f09_a2a4_f2df6c9579f9.slice - libcontainer container kubepods-besteffort-pode9c1ab5e_67c4_4f09_a2a4_f2df6c9579f9.slice.
Sep 12 00:17:43.392707 containerd[1588]: time="2025-09-12T00:17:43.392640383Z" level=info msg="connecting to shim 92c5406ead7b95b0d3c3e3c245f456330f27ef59495f2e42f274cc62b4a6c153" address="unix:///run/containerd/s/e012354daccffec17739dbbab0070f84240e138019d1eb23d644a6ba01881a54" namespace=k8s.io protocol=ttrpc version=3 Sep 12 00:17:43.429077 systemd[1]: Started cri-containerd-92c5406ead7b95b0d3c3e3c245f456330f27ef59495f2e42f274cc62b4a6c153.scope - libcontainer container 92c5406ead7b95b0d3c3e3c245f456330f27ef59495f2e42f274cc62b4a6c153. Sep 12 00:17:43.476884 kubelet[2728]: E0912 00:17:43.476083 2728 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 00:17:43.476884 kubelet[2728]: W0912 00:17:43.476117 2728 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 00:17:43.476884 kubelet[2728]: E0912 00:17:43.476181 2728 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 00:17:43.476884 kubelet[2728]: E0912 00:17:43.476524 2728 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 00:17:43.476884 kubelet[2728]: W0912 00:17:43.476547 2728 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 00:17:43.476884 kubelet[2728]: E0912 00:17:43.476561 2728 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 00:17:43.477446 kubelet[2728]: E0912 00:17:43.477081 2728 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 00:17:43.477446 kubelet[2728]: W0912 00:17:43.477093 2728 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 00:17:43.477446 kubelet[2728]: E0912 00:17:43.477178 2728 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 00:17:43.477908 kubelet[2728]: E0912 00:17:43.477891 2728 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 00:17:43.478087 kubelet[2728]: W0912 00:17:43.478064 2728 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 00:17:43.478157 kubelet[2728]: E0912 00:17:43.478095 2728 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 00:17:43.478625 kubelet[2728]: E0912 00:17:43.478604 2728 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 00:17:43.478625 kubelet[2728]: W0912 00:17:43.478621 2728 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 00:17:43.479028 kubelet[2728]: E0912 00:17:43.479000 2728 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 00:17:43.480254 kubelet[2728]: E0912 00:17:43.480231 2728 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 00:17:43.480254 kubelet[2728]: W0912 00:17:43.480248 2728 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 00:17:43.480548 kubelet[2728]: E0912 00:17:43.480474 2728 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 00:17:43.481304 kubelet[2728]: E0912 00:17:43.480901 2728 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 00:17:43.482432 kubelet[2728]: W0912 00:17:43.482391 2728 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 00:17:43.484788 kubelet[2728]: E0912 00:17:43.483170 2728 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 00:17:43.485284 kubelet[2728]: E0912 00:17:43.485110 2728 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 00:17:43.485284 kubelet[2728]: W0912 00:17:43.485133 2728 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 00:17:43.485284 kubelet[2728]: E0912 00:17:43.485251 2728 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 00:17:43.487295 kubelet[2728]: E0912 00:17:43.486758 2728 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 00:17:43.487295 kubelet[2728]: W0912 00:17:43.486776 2728 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 00:17:43.487295 kubelet[2728]: E0912 00:17:43.486803 2728 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 00:17:43.488155 kubelet[2728]: E0912 00:17:43.487490 2728 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 00:17:43.488155 kubelet[2728]: W0912 00:17:43.487501 2728 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 00:17:43.488155 kubelet[2728]: E0912 00:17:43.487514 2728 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 00:17:43.496137 kubelet[2728]: E0912 00:17:43.496090 2728 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 00:17:43.496137 kubelet[2728]: W0912 00:17:43.496114 2728 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 00:17:43.496137 kubelet[2728]: E0912 00:17:43.496136 2728 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 00:17:43.500623 containerd[1588]: time="2025-09-12T00:17:43.500590683Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-6f87dc89ff-dwsvg,Uid:e95d682b-1582-4e2c-8930-2481eb91a5f3,Namespace:calico-system,Attempt:0,} returns sandbox id \"92c5406ead7b95b0d3c3e3c245f456330f27ef59495f2e42f274cc62b4a6c153\"" Sep 12 00:17:43.501369 kubelet[2728]: E0912 00:17:43.501347 2728 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 00:17:43.502452 containerd[1588]: time="2025-09-12T00:17:43.502393146Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.3\"" Sep 12 00:17:43.535994 kubelet[2728]: E0912 00:17:43.534696 2728 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-n85pl" podUID="076fcf6e-1869-4af3-96de-c12eddd0a2fa" Sep 12 00:17:43.574965 kubelet[2728]: E0912 00:17:43.574907 2728 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 00:17:43.574965 kubelet[2728]: W0912 00:17:43.574936 2728 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 00:17:43.574965 kubelet[2728]: E0912 00:17:43.574977 2728 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 00:17:43.575734 kubelet[2728]: E0912 00:17:43.575661 2728 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 00:17:43.575734 kubelet[2728]: W0912 00:17:43.575681 2728 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 00:17:43.575734 kubelet[2728]: E0912 00:17:43.575692 2728 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 00:17:43.575949 kubelet[2728]: E0912 00:17:43.575929 2728 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 00:17:43.575949 kubelet[2728]: W0912 00:17:43.575941 2728 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 00:17:43.576102 kubelet[2728]: E0912 00:17:43.575949 2728 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 00:17:43.576244 kubelet[2728]: E0912 00:17:43.576223 2728 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 00:17:43.576244 kubelet[2728]: W0912 00:17:43.576237 2728 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 00:17:43.576318 kubelet[2728]: E0912 00:17:43.576247 2728 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 00:17:43.577503 kubelet[2728]: E0912 00:17:43.576758 2728 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 00:17:43.577503 kubelet[2728]: W0912 00:17:43.576778 2728 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 00:17:43.577503 kubelet[2728]: E0912 00:17:43.576791 2728 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 00:17:43.577503 kubelet[2728]: E0912 00:17:43.577003 2728 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 00:17:43.577503 kubelet[2728]: W0912 00:17:43.577012 2728 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 00:17:43.577503 kubelet[2728]: E0912 00:17:43.577021 2728 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 00:17:43.577503 kubelet[2728]: E0912 00:17:43.577200 2728 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 00:17:43.577503 kubelet[2728]: W0912 00:17:43.577207 2728 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 00:17:43.577503 kubelet[2728]: E0912 00:17:43.577214 2728 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 00:17:43.577851 kubelet[2728]: E0912 00:17:43.577526 2728 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 00:17:43.577851 kubelet[2728]: W0912 00:17:43.577536 2728 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 00:17:43.577851 kubelet[2728]: E0912 00:17:43.577546 2728 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 00:17:43.577851 kubelet[2728]: E0912 00:17:43.577766 2728 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 00:17:43.577851 kubelet[2728]: W0912 00:17:43.577779 2728 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 00:17:43.577851 kubelet[2728]: E0912 00:17:43.577788 2728 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 00:17:43.578089 kubelet[2728]: E0912 00:17:43.578014 2728 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 00:17:43.578089 kubelet[2728]: W0912 00:17:43.578023 2728 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 00:17:43.578089 kubelet[2728]: E0912 00:17:43.578031 2728 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 00:17:43.578353 kubelet[2728]: E0912 00:17:43.578325 2728 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 00:17:43.578353 kubelet[2728]: W0912 00:17:43.578343 2728 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 00:17:43.578353 kubelet[2728]: E0912 00:17:43.578353 2728 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 00:17:43.578824 kubelet[2728]: E0912 00:17:43.578660 2728 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 00:17:43.578824 kubelet[2728]: W0912 00:17:43.578692 2728 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 00:17:43.578824 kubelet[2728]: E0912 00:17:43.578781 2728 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 00:17:43.579193 kubelet[2728]: E0912 00:17:43.579168 2728 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 00:17:43.579193 kubelet[2728]: W0912 00:17:43.579183 2728 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 00:17:43.579325 kubelet[2728]: E0912 00:17:43.579196 2728 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 00:17:43.579540 kubelet[2728]: E0912 00:17:43.579523 2728 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 00:17:43.579540 kubelet[2728]: W0912 00:17:43.579537 2728 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 00:17:43.579620 kubelet[2728]: E0912 00:17:43.579549 2728 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 00:17:43.579808 kubelet[2728]: E0912 00:17:43.579791 2728 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 00:17:43.579808 kubelet[2728]: W0912 00:17:43.579804 2728 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 00:17:43.579898 kubelet[2728]: E0912 00:17:43.579815 2728 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 00:17:43.580060 kubelet[2728]: E0912 00:17:43.580045 2728 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 00:17:43.580060 kubelet[2728]: W0912 00:17:43.580057 2728 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 00:17:43.580125 kubelet[2728]: E0912 00:17:43.580068 2728 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 00:17:43.580337 kubelet[2728]: E0912 00:17:43.580321 2728 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 00:17:43.580337 kubelet[2728]: W0912 00:17:43.580333 2728 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 00:17:43.580424 kubelet[2728]: E0912 00:17:43.580345 2728 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 00:17:43.580557 kubelet[2728]: E0912 00:17:43.580542 2728 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 00:17:43.580557 kubelet[2728]: W0912 00:17:43.580555 2728 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 00:17:43.580631 kubelet[2728]: E0912 00:17:43.580565 2728 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 00:17:43.580868 kubelet[2728]: E0912 00:17:43.580847 2728 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 00:17:43.580868 kubelet[2728]: W0912 00:17:43.580861 2728 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 00:17:43.580962 kubelet[2728]: E0912 00:17:43.580872 2728 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 00:17:43.581128 kubelet[2728]: E0912 00:17:43.581111 2728 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 00:17:43.581128 kubelet[2728]: W0912 00:17:43.581125 2728 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 00:17:43.581203 kubelet[2728]: E0912 00:17:43.581136 2728 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 00:17:43.581482 kubelet[2728]: E0912 00:17:43.581463 2728 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 00:17:43.581482 kubelet[2728]: W0912 00:17:43.581480 2728 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 00:17:43.581564 kubelet[2728]: E0912 00:17:43.581493 2728 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 00:17:43.581564 kubelet[2728]: I0912 00:17:43.581537 2728 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/076fcf6e-1869-4af3-96de-c12eddd0a2fa-registration-dir\") pod \"csi-node-driver-n85pl\" (UID: \"076fcf6e-1869-4af3-96de-c12eddd0a2fa\") " pod="calico-system/csi-node-driver-n85pl" Sep 12 00:17:43.581836 kubelet[2728]: E0912 00:17:43.581818 2728 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 00:17:43.581836 kubelet[2728]: W0912 00:17:43.581833 2728 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 00:17:43.581936 kubelet[2728]: E0912 00:17:43.581864 2728 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 00:17:43.581936 kubelet[2728]: I0912 00:17:43.581886 2728 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g52lz\" (UniqueName: \"kubernetes.io/projected/076fcf6e-1869-4af3-96de-c12eddd0a2fa-kube-api-access-g52lz\") pod \"csi-node-driver-n85pl\" (UID: \"076fcf6e-1869-4af3-96de-c12eddd0a2fa\") " pod="calico-system/csi-node-driver-n85pl" Sep 12 00:17:43.582143 kubelet[2728]: E0912 00:17:43.582128 2728 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 00:17:43.582143 kubelet[2728]: W0912 00:17:43.582139 2728 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 00:17:43.582558 kubelet[2728]: E0912 00:17:43.582523 2728 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 00:17:43.582558 kubelet[2728]: W0912 00:17:43.582538 2728 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 00:17:43.582558 kubelet[2728]: E0912 00:17:43.582557 2728 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 00:17:43.582825 kubelet[2728]: I0912 00:17:43.582608 2728 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/076fcf6e-1869-4af3-96de-c12eddd0a2fa-kubelet-dir\") pod \"csi-node-driver-n85pl\" (UID: \"076fcf6e-1869-4af3-96de-c12eddd0a2fa\") " pod="calico-system/csi-node-driver-n85pl" Sep 12 00:17:43.582825 kubelet[2728]: E0912 00:17:43.582460 2728 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 00:17:43.582908 kubelet[2728]: E0912 00:17:43.582888 2728 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 00:17:43.582908 kubelet[2728]: W0912 00:17:43.582904 2728 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 00:17:43.582978 kubelet[2728]: E0912 00:17:43.582927 2728 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 00:17:43.582978 kubelet[2728]: I0912 00:17:43.582947 2728 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/076fcf6e-1869-4af3-96de-c12eddd0a2fa-varrun\") pod \"csi-node-driver-n85pl\" (UID: \"076fcf6e-1869-4af3-96de-c12eddd0a2fa\") " pod="calico-system/csi-node-driver-n85pl" Sep 12 00:17:43.583202 kubelet[2728]: E0912 00:17:43.583182 2728 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 00:17:43.583202 kubelet[2728]: W0912 00:17:43.583197 2728 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 00:17:43.583259 kubelet[2728]: E0912 00:17:43.583211 2728 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 00:17:43.583501 kubelet[2728]: E0912 00:17:43.583472 2728 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 00:17:43.583501 kubelet[2728]: W0912 00:17:43.583489 2728 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 00:17:43.583618 kubelet[2728]: E0912 00:17:43.583514 2728 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 00:17:43.583838 kubelet[2728]: E0912 00:17:43.583805 2728 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 00:17:43.583838 kubelet[2728]: W0912 00:17:43.583821 2728 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 00:17:43.583912 kubelet[2728]: E0912 00:17:43.583856 2728 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 00:17:43.584114 kubelet[2728]: E0912 00:17:43.584092 2728 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 00:17:43.584114 kubelet[2728]: W0912 00:17:43.584108 2728 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 00:17:43.584189 kubelet[2728]: E0912 00:17:43.584138 2728 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 00:17:43.584424 kubelet[2728]: E0912 00:17:43.584400 2728 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 00:17:43.584424 kubelet[2728]: W0912 00:17:43.584416 2728 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 00:17:43.584505 kubelet[2728]: E0912 00:17:43.584462 2728 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 00:17:43.584680 kubelet[2728]: E0912 00:17:43.584661 2728 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 00:17:43.584680 kubelet[2728]: W0912 00:17:43.584677 2728 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 00:17:43.584836 kubelet[2728]: E0912 00:17:43.584800 2728 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 00:17:43.584925 kubelet[2728]: E0912 00:17:43.584908 2728 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 00:17:43.584925 kubelet[2728]: W0912 00:17:43.584923 2728 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 00:17:43.584985 kubelet[2728]: E0912 00:17:43.584949 2728 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 00:17:43.584985 kubelet[2728]: I0912 00:17:43.584980 2728 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/076fcf6e-1869-4af3-96de-c12eddd0a2fa-socket-dir\") pod \"csi-node-driver-n85pl\" (UID: \"076fcf6e-1869-4af3-96de-c12eddd0a2fa\") " pod="calico-system/csi-node-driver-n85pl" Sep 12 00:17:43.585242 kubelet[2728]: E0912 00:17:43.585219 2728 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 00:17:43.585242 kubelet[2728]: W0912 00:17:43.585238 2728 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 00:17:43.585306 kubelet[2728]: E0912 00:17:43.585252 2728 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 00:17:43.585494 kubelet[2728]: E0912 00:17:43.585474 2728 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 00:17:43.585494 kubelet[2728]: W0912 00:17:43.585490 2728 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 00:17:43.585540 kubelet[2728]: E0912 00:17:43.585502 2728 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 00:17:43.585780 kubelet[2728]: E0912 00:17:43.585685 2728 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 00:17:43.585819 kubelet[2728]: W0912 00:17:43.585781 2728 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 00:17:43.585819 kubelet[2728]: E0912 00:17:43.585795 2728 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 00:17:43.686760 kubelet[2728]: E0912 00:17:43.686698 2728 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 00:17:43.686760 kubelet[2728]: W0912 00:17:43.686747 2728 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 00:17:43.686760 kubelet[2728]: E0912 00:17:43.686774 2728 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 00:17:43.687091 kubelet[2728]: E0912 00:17:43.687064 2728 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 00:17:43.687091 kubelet[2728]: W0912 00:17:43.687079 2728 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 00:17:43.687144 kubelet[2728]: E0912 00:17:43.687092 2728 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 00:17:43.687348 kubelet[2728]: E0912 00:17:43.687327 2728 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 00:17:43.687348 kubelet[2728]: W0912 00:17:43.687344 2728 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 00:17:43.687401 kubelet[2728]: E0912 00:17:43.687365 2728 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 00:17:43.687589 kubelet[2728]: E0912 00:17:43.687572 2728 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 00:17:43.687589 kubelet[2728]: W0912 00:17:43.687586 2728 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 00:17:43.687636 kubelet[2728]: E0912 00:17:43.687600 2728 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 00:17:43.687844 kubelet[2728]: E0912 00:17:43.687811 2728 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 00:17:43.687844 kubelet[2728]: W0912 00:17:43.687828 2728 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 00:17:43.687844 kubelet[2728]: E0912 00:17:43.687843 2728 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 00:17:43.688112 kubelet[2728]: E0912 00:17:43.688075 2728 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 00:17:43.688112 kubelet[2728]: W0912 00:17:43.688086 2728 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 00:17:43.688112 kubelet[2728]: E0912 00:17:43.688102 2728 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 00:17:43.688438 kubelet[2728]: E0912 00:17:43.688391 2728 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 00:17:43.688438 kubelet[2728]: W0912 00:17:43.688423 2728 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 00:17:43.688513 kubelet[2728]: E0912 00:17:43.688457 2728 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 00:17:43.688677 kubelet[2728]: E0912 00:17:43.688652 2728 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 00:17:43.688836 kubelet[2728]: W0912 00:17:43.688664 2728 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 00:17:43.689595 kubelet[2728]: E0912 00:17:43.688993 2728 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 00:17:43.689595 kubelet[2728]: E0912 00:17:43.689259 2728 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 00:17:43.689595 kubelet[2728]: W0912 00:17:43.689271 2728 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 00:17:43.689595 kubelet[2728]: E0912 00:17:43.689284 2728 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 00:17:43.689754 containerd[1588]: time="2025-09-12T00:17:43.689663202Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-z4lt6,Uid:e9c1ab5e-67c4-4f09-a2a4-f2df6c9579f9,Namespace:calico-system,Attempt:0,}" Sep 12 00:17:43.692517 kubelet[2728]: E0912 00:17:43.692417 2728 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 00:17:43.692742 kubelet[2728]: W0912 00:17:43.692692 2728 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 00:17:43.692828 kubelet[2728]: E0912 00:17:43.692814 2728 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 00:17:43.705894 kubelet[2728]: E0912 00:17:43.705855 2728 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 00:17:43.705894 kubelet[2728]: W0912 00:17:43.705881 2728 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 00:17:43.706020 kubelet[2728]: E0912 00:17:43.705905 2728 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 00:17:43.722268 containerd[1588]: time="2025-09-12T00:17:43.722208695Z" level=info msg="connecting to shim 4873774f0f0031cea742e232c30184de1e344e2b34d1f2c46500b30e7e53b600" address="unix:///run/containerd/s/54197f650cdd861681bb6534edcda78c25987c2134f3069b6681eeac65068c29" namespace=k8s.io protocol=ttrpc version=3 Sep 12 00:17:43.758311 systemd[1]: Started cri-containerd-4873774f0f0031cea742e232c30184de1e344e2b34d1f2c46500b30e7e53b600.scope - libcontainer container 4873774f0f0031cea742e232c30184de1e344e2b34d1f2c46500b30e7e53b600. Sep 12 00:17:43.817034 containerd[1588]: time="2025-09-12T00:17:43.816975347Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-z4lt6,Uid:e9c1ab5e-67c4-4f09-a2a4-f2df6c9579f9,Namespace:calico-system,Attempt:0,} returns sandbox id \"4873774f0f0031cea742e232c30184de1e344e2b34d1f2c46500b30e7e53b600\"" Sep 12 00:17:44.985611 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2203329964.mount: Deactivated successfully. Sep 12 00:17:45.408513 kubelet[2728]: E0912 00:17:45.408454 2728 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-n85pl" podUID="076fcf6e-1869-4af3-96de-c12eddd0a2fa" Sep 12 00:17:46.666565 containerd[1588]: time="2025-09-12T00:17:46.666504715Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 00:17:46.667501 containerd[1588]: time="2025-09-12T00:17:46.667453155Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.3: active requests=0, bytes read=35237389" Sep 12 00:17:46.668704 containerd[1588]: time="2025-09-12T00:17:46.668675179Z" level=info msg="ImageCreate event name:\"sha256:1d7bb7b0cce2924d35c7c26f6b6600409ea7c9535074c3d2e517ffbb3a0e0b36\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 00:17:46.671567 containerd[1588]: time="2025-09-12T00:17:46.671531089Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:f4a3d61ffda9c98a53adeb412c5af404ca3727a3cc2d0b4ef28d197bdd47ecaa\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 00:17:46.672093 containerd[1588]: time="2025-09-12T00:17:46.672061354Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.3\" with image id \"sha256:1d7bb7b0cce2924d35c7c26f6b6600409ea7c9535074c3d2e517ffbb3a0e0b36\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:f4a3d61ffda9c98a53adeb412c5af404ca3727a3cc2d0b4ef28d197bdd47ecaa\", size \"35237243\" in 3.16962697s" Sep 12 00:17:46.672162 containerd[1588]: 
time="2025-09-12T00:17:46.672100958Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.3\" returns image reference \"sha256:1d7bb7b0cce2924d35c7c26f6b6600409ea7c9535074c3d2e517ffbb3a0e0b36\"" Sep 12 00:17:46.673284 containerd[1588]: time="2025-09-12T00:17:46.673257249Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\"" Sep 12 00:17:46.684421 containerd[1588]: time="2025-09-12T00:17:46.684375906Z" level=info msg="CreateContainer within sandbox \"92c5406ead7b95b0d3c3e3c245f456330f27ef59495f2e42f274cc62b4a6c153\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Sep 12 00:17:46.693504 containerd[1588]: time="2025-09-12T00:17:46.693451791Z" level=info msg="Container 7fede269648db8e8d8b02cfda2719afec418426e5e00218b584cf806265361f9: CDI devices from CRI Config.CDIDevices: []" Sep 12 00:17:46.702364 containerd[1588]: time="2025-09-12T00:17:46.702317640Z" level=info msg="CreateContainer within sandbox \"92c5406ead7b95b0d3c3e3c245f456330f27ef59495f2e42f274cc62b4a6c153\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"7fede269648db8e8d8b02cfda2719afec418426e5e00218b584cf806265361f9\"" Sep 12 00:17:46.702935 containerd[1588]: time="2025-09-12T00:17:46.702892439Z" level=info msg="StartContainer for \"7fede269648db8e8d8b02cfda2719afec418426e5e00218b584cf806265361f9\"" Sep 12 00:17:46.704192 containerd[1588]: time="2025-09-12T00:17:46.704160489Z" level=info msg="connecting to shim 7fede269648db8e8d8b02cfda2719afec418426e5e00218b584cf806265361f9" address="unix:///run/containerd/s/e012354daccffec17739dbbab0070f84240e138019d1eb23d644a6ba01881a54" protocol=ttrpc version=3 Sep 12 00:17:46.729901 systemd[1]: Started cri-containerd-7fede269648db8e8d8b02cfda2719afec418426e5e00218b584cf806265361f9.scope - libcontainer container 7fede269648db8e8d8b02cfda2719afec418426e5e00218b584cf806265361f9. Sep 12 00:17:46.785640 containerd[1588]: time="2025-09-12T00:17:46.785574160Z" level=info msg="StartContainer for \"7fede269648db8e8d8b02cfda2719afec418426e5e00218b584cf806265361f9\" returns successfully" Sep 12 00:17:47.408742 kubelet[2728]: E0912 00:17:47.408656 2728 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-n85pl" podUID="076fcf6e-1869-4af3-96de-c12eddd0a2fa" Sep 12 00:17:47.468501 kubelet[2728]: E0912 00:17:47.468438 2728 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 00:17:47.505743 kubelet[2728]: E0912 00:17:47.505675 2728 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 00:17:47.505743 kubelet[2728]: W0912 00:17:47.505707 2728 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 00:17:47.505971 kubelet[2728]: E0912 00:17:47.505762 2728 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 00:17:47.523016 kubelet[2728]: E0912 00:17:47.522998 2728 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 00:17:47.523016 kubelet[2728]: W0912 00:17:47.523009 2728 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 00:17:47.523082 kubelet[2728]: E0912 00:17:47.523022 2728 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 00:17:47.523197 kubelet[2728]: E0912 00:17:47.523181 2728 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 00:17:47.523197 kubelet[2728]: W0912 00:17:47.523191 2728 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 00:17:47.523197 kubelet[2728]: E0912 00:17:47.523198 2728 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 00:17:47.604411 kubelet[2728]: I0912 00:17:47.604262 2728 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-6f87dc89ff-dwsvg" podStartSLOduration=2.433198514 podStartE2EDuration="5.604206387s" podCreationTimestamp="2025-09-12 00:17:42 +0000 UTC" firstStartedPulling="2025-09-12 00:17:43.50207044 +0000 UTC m=+15.205804419" lastFinishedPulling="2025-09-12 00:17:46.673078313 +0000 UTC m=+18.376812292" observedRunningTime="2025-09-12 00:17:47.593190594 +0000 UTC m=+19.296924563" watchObservedRunningTime="2025-09-12 00:17:47.604206387 +0000 UTC m=+19.307940356" Sep 12 00:17:48.168491 containerd[1588]: time="2025-09-12T00:17:48.168422606Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 00:17:48.169358 containerd[1588]: time="2025-09-12T00:17:48.169323586Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3: active requests=0, bytes read=4446660" Sep 12 00:17:48.170562 containerd[1588]: time="2025-09-12T00:17:48.170521114Z" level=info msg="ImageCreate event name:\"sha256:4f2b088ed6fdfc6a97ac0650a4ba8171107d6656ce265c592e4c8423fd10e5c4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 00:17:48.173013 containerd[1588]: time="2025-09-12T00:17:48.172952427Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:81bdfcd9dbd36624dc35354e8c181c75631ba40e6c7df5820f5f56cea36f0ef9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 00:17:48.173601 containerd[1588]: time="2025-09-12T00:17:48.173567501Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\" with image id \"sha256:4f2b088ed6fdfc6a97ac0650a4ba8171107d6656ce265c592e4c8423fd10e5c4\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:81bdfcd9dbd36624dc35354e8c181c75631ba40e6c7df5820f5f56cea36f0ef9\", size \"5939323\" in 1.500281047s" Sep 12 00:17:48.173601 containerd[1588]: 
time="2025-09-12T00:17:48.173598920Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\" returns image reference \"sha256:4f2b088ed6fdfc6a97ac0650a4ba8171107d6656ce265c592e4c8423fd10e5c4\"" Sep 12 00:17:48.175668 containerd[1588]: time="2025-09-12T00:17:48.175633598Z" level=info msg="CreateContainer within sandbox \"4873774f0f0031cea742e232c30184de1e344e2b34d1f2c46500b30e7e53b600\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Sep 12 00:17:48.186595 containerd[1588]: time="2025-09-12T00:17:48.186535317Z" level=info msg="Container 7d446db3d3a689afd2b6239295c44a4268f4303218204504026292bf7aee1e68: CDI devices from CRI Config.CDIDevices: []" Sep 12 00:17:48.203664 containerd[1588]: time="2025-09-12T00:17:48.203602516Z" level=info msg="CreateContainer within sandbox \"4873774f0f0031cea742e232c30184de1e344e2b34d1f2c46500b30e7e53b600\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"7d446db3d3a689afd2b6239295c44a4268f4303218204504026292bf7aee1e68\"" Sep 12 00:17:48.204325 containerd[1588]: time="2025-09-12T00:17:48.204282242Z" level=info msg="StartContainer for \"7d446db3d3a689afd2b6239295c44a4268f4303218204504026292bf7aee1e68\"" Sep 12 00:17:48.205981 containerd[1588]: time="2025-09-12T00:17:48.205950082Z" level=info msg="connecting to shim 7d446db3d3a689afd2b6239295c44a4268f4303218204504026292bf7aee1e68" address="unix:///run/containerd/s/54197f650cdd861681bb6534edcda78c25987c2134f3069b6681eeac65068c29" protocol=ttrpc version=3 Sep 12 00:17:48.230896 systemd[1]: Started cri-containerd-7d446db3d3a689afd2b6239295c44a4268f4303218204504026292bf7aee1e68.scope - libcontainer container 7d446db3d3a689afd2b6239295c44a4268f4303218204504026292bf7aee1e68. Sep 12 00:17:48.286313 containerd[1588]: time="2025-09-12T00:17:48.286261710Z" level=info msg="StartContainer for \"7d446db3d3a689afd2b6239295c44a4268f4303218204504026292bf7aee1e68\" returns successfully" Sep 12 00:17:48.296683 systemd[1]: cri-containerd-7d446db3d3a689afd2b6239295c44a4268f4303218204504026292bf7aee1e68.scope: Deactivated successfully. Sep 12 00:17:48.298956 containerd[1588]: time="2025-09-12T00:17:48.298909025Z" level=info msg="received exit event container_id:\"7d446db3d3a689afd2b6239295c44a4268f4303218204504026292bf7aee1e68\" id:\"7d446db3d3a689afd2b6239295c44a4268f4303218204504026292bf7aee1e68\" pid:3437 exited_at:{seconds:1757636268 nanos:298472145}" Sep 12 00:17:48.300171 containerd[1588]: time="2025-09-12T00:17:48.299185764Z" level=info msg="TaskExit event in podsandbox handler container_id:\"7d446db3d3a689afd2b6239295c44a4268f4303218204504026292bf7aee1e68\" id:\"7d446db3d3a689afd2b6239295c44a4268f4303218204504026292bf7aee1e68\" pid:3437 exited_at:{seconds:1757636268 nanos:298472145}" Sep 12 00:17:48.328311 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-7d446db3d3a689afd2b6239295c44a4268f4303218204504026292bf7aee1e68-rootfs.mount: Deactivated successfully. 
Sep 12 00:17:48.474111 kubelet[2728]: E0912 00:17:48.473973 2728 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 00:17:49.408150 kubelet[2728]: E0912 00:17:49.408081 2728 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-n85pl" podUID="076fcf6e-1869-4af3-96de-c12eddd0a2fa" Sep 12 00:17:49.477634 kubelet[2728]: E0912 00:17:49.477593 2728 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 00:17:49.478974 containerd[1588]: time="2025-09-12T00:17:49.478922693Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.3\"" Sep 12 00:17:51.408486 kubelet[2728]: E0912 00:17:51.408408 2728 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-n85pl" podUID="076fcf6e-1869-4af3-96de-c12eddd0a2fa" Sep 12 00:17:53.323335 containerd[1588]: time="2025-09-12T00:17:53.323268469Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 00:17:53.324566 containerd[1588]: time="2025-09-12T00:17:53.324461317Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.3: active requests=0, bytes read=70440613" Sep 12 00:17:53.326117 containerd[1588]: time="2025-09-12T00:17:53.326081467Z" level=info msg="ImageCreate event name:\"sha256:034822460c2f667e1f4a7679c843cc35ce1bf2c25dec86f04e07fb403df7e458\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 00:17:53.329882 containerd[1588]: time="2025-09-12T00:17:53.329825892Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:73d1e391050490d54e5bee8ff2b1a50a8be1746c98dc530361b00e8c0ab63f87\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 00:17:53.330378 containerd[1588]: time="2025-09-12T00:17:53.330338945Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.3\" with image id \"sha256:034822460c2f667e1f4a7679c843cc35ce1bf2c25dec86f04e07fb403df7e458\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:73d1e391050490d54e5bee8ff2b1a50a8be1746c98dc530361b00e8c0ab63f87\", size \"71933316\" in 3.851374454s" Sep 12 00:17:53.330378 containerd[1588]: time="2025-09-12T00:17:53.330373380Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.3\" returns image reference \"sha256:034822460c2f667e1f4a7679c843cc35ce1bf2c25dec86f04e07fb403df7e458\"" Sep 12 00:17:53.333008 containerd[1588]: time="2025-09-12T00:17:53.332948471Z" level=info msg="CreateContainer within sandbox \"4873774f0f0031cea742e232c30184de1e344e2b34d1f2c46500b30e7e53b600\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Sep 12 00:17:53.345049 containerd[1588]: time="2025-09-12T00:17:53.344977392Z" level=info msg="Container f8d9973a0f2627fb0c457d7adfb3dd18218d127f0976b10e694a796f38556123: CDI devices from CRI Config.CDIDevices: []" Sep 12 00:17:53.355531 containerd[1588]: 
time="2025-09-12T00:17:53.355462084Z" level=info msg="CreateContainer within sandbox \"4873774f0f0031cea742e232c30184de1e344e2b34d1f2c46500b30e7e53b600\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"f8d9973a0f2627fb0c457d7adfb3dd18218d127f0976b10e694a796f38556123\"" Sep 12 00:17:53.356739 containerd[1588]: time="2025-09-12T00:17:53.356286660Z" level=info msg="StartContainer for \"f8d9973a0f2627fb0c457d7adfb3dd18218d127f0976b10e694a796f38556123\"" Sep 12 00:17:53.357826 containerd[1588]: time="2025-09-12T00:17:53.357780394Z" level=info msg="connecting to shim f8d9973a0f2627fb0c457d7adfb3dd18218d127f0976b10e694a796f38556123" address="unix:///run/containerd/s/54197f650cdd861681bb6534edcda78c25987c2134f3069b6681eeac65068c29" protocol=ttrpc version=3 Sep 12 00:17:53.383908 systemd[1]: Started cri-containerd-f8d9973a0f2627fb0c457d7adfb3dd18218d127f0976b10e694a796f38556123.scope - libcontainer container f8d9973a0f2627fb0c457d7adfb3dd18218d127f0976b10e694a796f38556123. Sep 12 00:17:53.408322 kubelet[2728]: E0912 00:17:53.408266 2728 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-n85pl" podUID="076fcf6e-1869-4af3-96de-c12eddd0a2fa" Sep 12 00:17:54.327097 containerd[1588]: time="2025-09-12T00:17:54.327049137Z" level=info msg="StartContainer for \"f8d9973a0f2627fb0c457d7adfb3dd18218d127f0976b10e694a796f38556123\" returns successfully" Sep 12 00:17:55.408089 kubelet[2728]: E0912 00:17:55.408031 2728 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-n85pl" podUID="076fcf6e-1869-4af3-96de-c12eddd0a2fa" Sep 12 00:17:55.472232 containerd[1588]: time="2025-09-12T00:17:55.471367631Z" level=info msg="received exit event container_id:\"f8d9973a0f2627fb0c457d7adfb3dd18218d127f0976b10e694a796f38556123\" id:\"f8d9973a0f2627fb0c457d7adfb3dd18218d127f0976b10e694a796f38556123\" pid:3498 exited_at:{seconds:1757636275 nanos:471148119}" Sep 12 00:17:55.472232 containerd[1588]: time="2025-09-12T00:17:55.471617329Z" level=info msg="TaskExit event in podsandbox handler container_id:\"f8d9973a0f2627fb0c457d7adfb3dd18218d127f0976b10e694a796f38556123\" id:\"f8d9973a0f2627fb0c457d7adfb3dd18218d127f0976b10e694a796f38556123\" pid:3498 exited_at:{seconds:1757636275 nanos:471148119}" Sep 12 00:17:55.471436 systemd[1]: cri-containerd-f8d9973a0f2627fb0c457d7adfb3dd18218d127f0976b10e694a796f38556123.scope: Deactivated successfully. Sep 12 00:17:55.471861 systemd[1]: cri-containerd-f8d9973a0f2627fb0c457d7adfb3dd18218d127f0976b10e694a796f38556123.scope: Consumed 628ms CPU time, 175.7M memory peak, 4K read from disk, 171.3M written to disk. Sep 12 00:17:55.498653 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f8d9973a0f2627fb0c457d7adfb3dd18218d127f0976b10e694a796f38556123-rootfs.mount: Deactivated successfully. Sep 12 00:17:55.509795 kubelet[2728]: I0912 00:17:55.509765 2728 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Sep 12 00:17:55.552746 systemd[1]: Created slice kubepods-burstable-podbcef082c_9a86_4d91_a4dd_8939318edd7b.slice - libcontainer container kubepods-burstable-podbcef082c_9a86_4d91_a4dd_8939318edd7b.slice. 
Sep 12 00:17:55.563652 systemd[1]: Created slice kubepods-besteffort-pod71bfd4f7_26e5_4df5_8cca_b940b8c12eb9.slice - libcontainer container kubepods-besteffort-pod71bfd4f7_26e5_4df5_8cca_b940b8c12eb9.slice. Sep 12 00:17:55.570165 systemd[1]: Created slice kubepods-besteffort-pod9786ace3_0bf8_4cca_8b2e_49b4a075c0b6.slice - libcontainer container kubepods-besteffort-pod9786ace3_0bf8_4cca_8b2e_49b4a075c0b6.slice. Sep 12 00:17:55.575432 systemd[1]: Created slice kubepods-burstable-pod8474d883_1171_4929_b6d0_2b8587f1cfa7.slice - libcontainer container kubepods-burstable-pod8474d883_1171_4929_b6d0_2b8587f1cfa7.slice. Sep 12 00:17:55.575551 kubelet[2728]: I0912 00:17:55.575471 2728 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q9wm6\" (UniqueName: \"kubernetes.io/projected/8474d883-1171-4929-b6d0-2b8587f1cfa7-kube-api-access-q9wm6\") pod \"coredns-668d6bf9bc-qlmhd\" (UID: \"8474d883-1171-4929-b6d0-2b8587f1cfa7\") " pod="kube-system/coredns-668d6bf9bc-qlmhd" Sep 12 00:17:55.575551 kubelet[2728]: I0912 00:17:55.575504 2728 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/f6080d34-27a9-469d-9690-545a2c483565-goldmane-key-pair\") pod \"goldmane-54d579b49d-v4pxb\" (UID: \"f6080d34-27a9-469d-9690-545a2c483565\") " pod="calico-system/goldmane-54d579b49d-v4pxb" Sep 12 00:17:55.575551 kubelet[2728]: I0912 00:17:55.575520 2728 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/11547b93-d91e-47e5-8b4b-3dfb6467b5be-whisker-backend-key-pair\") pod \"whisker-7b8465df64-lb84p\" (UID: \"11547b93-d91e-47e5-8b4b-3dfb6467b5be\") " pod="calico-system/whisker-7b8465df64-lb84p" Sep 12 00:17:55.575551 kubelet[2728]: I0912 00:17:55.575534 2728 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/11547b93-d91e-47e5-8b4b-3dfb6467b5be-whisker-ca-bundle\") pod \"whisker-7b8465df64-lb84p\" (UID: \"11547b93-d91e-47e5-8b4b-3dfb6467b5be\") " pod="calico-system/whisker-7b8465df64-lb84p" Sep 12 00:17:55.575551 kubelet[2728]: I0912 00:17:55.575552 2728 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5j5kp\" (UniqueName: \"kubernetes.io/projected/9786ace3-0bf8-4cca-8b2e-49b4a075c0b6-kube-api-access-5j5kp\") pod \"calico-kube-controllers-5c944b767b-95z2p\" (UID: \"9786ace3-0bf8-4cca-8b2e-49b4a075c0b6\") " pod="calico-system/calico-kube-controllers-5c944b767b-95z2p" Sep 12 00:17:55.575843 kubelet[2728]: I0912 00:17:55.575567 2728 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rkx6c\" (UniqueName: \"kubernetes.io/projected/71bfd4f7-26e5-4df5-8cca-b940b8c12eb9-kube-api-access-rkx6c\") pod \"calico-apiserver-77d8f65979-w66d2\" (UID: \"71bfd4f7-26e5-4df5-8cca-b940b8c12eb9\") " pod="calico-apiserver/calico-apiserver-77d8f65979-w66d2" Sep 12 00:17:55.575843 kubelet[2728]: I0912 00:17:55.575581 2728 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/d85adbdb-2840-4a43-9e8a-c70fcf484d58-calico-apiserver-certs\") pod \"calico-apiserver-77d8f65979-kwb72\" (UID: \"d85adbdb-2840-4a43-9e8a-c70fcf484d58\") " 
pod="calico-apiserver/calico-apiserver-77d8f65979-kwb72" Sep 12 00:17:55.575843 kubelet[2728]: I0912 00:17:55.575595 2728 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/8474d883-1171-4929-b6d0-2b8587f1cfa7-config-volume\") pod \"coredns-668d6bf9bc-qlmhd\" (UID: \"8474d883-1171-4929-b6d0-2b8587f1cfa7\") " pod="kube-system/coredns-668d6bf9bc-qlmhd" Sep 12 00:17:55.575843 kubelet[2728]: I0912 00:17:55.575609 2728 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rfwqw\" (UniqueName: \"kubernetes.io/projected/bcef082c-9a86-4d91-a4dd-8939318edd7b-kube-api-access-rfwqw\") pod \"coredns-668d6bf9bc-mlsc9\" (UID: \"bcef082c-9a86-4d91-a4dd-8939318edd7b\") " pod="kube-system/coredns-668d6bf9bc-mlsc9" Sep 12 00:17:55.575843 kubelet[2728]: I0912 00:17:55.575639 2728 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/71bfd4f7-26e5-4df5-8cca-b940b8c12eb9-calico-apiserver-certs\") pod \"calico-apiserver-77d8f65979-w66d2\" (UID: \"71bfd4f7-26e5-4df5-8cca-b940b8c12eb9\") " pod="calico-apiserver/calico-apiserver-77d8f65979-w66d2" Sep 12 00:17:55.576063 kubelet[2728]: I0912 00:17:55.575675 2728 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f6080d34-27a9-469d-9690-545a2c483565-config\") pod \"goldmane-54d579b49d-v4pxb\" (UID: \"f6080d34-27a9-469d-9690-545a2c483565\") " pod="calico-system/goldmane-54d579b49d-v4pxb" Sep 12 00:17:55.576063 kubelet[2728]: I0912 00:17:55.575747 2728 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f9plq\" (UniqueName: \"kubernetes.io/projected/11547b93-d91e-47e5-8b4b-3dfb6467b5be-kube-api-access-f9plq\") pod \"whisker-7b8465df64-lb84p\" (UID: \"11547b93-d91e-47e5-8b4b-3dfb6467b5be\") " pod="calico-system/whisker-7b8465df64-lb84p" Sep 12 00:17:55.576063 kubelet[2728]: I0912 00:17:55.575773 2728 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9786ace3-0bf8-4cca-8b2e-49b4a075c0b6-tigera-ca-bundle\") pod \"calico-kube-controllers-5c944b767b-95z2p\" (UID: \"9786ace3-0bf8-4cca-8b2e-49b4a075c0b6\") " pod="calico-system/calico-kube-controllers-5c944b767b-95z2p" Sep 12 00:17:55.576063 kubelet[2728]: I0912 00:17:55.575811 2728 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f6080d34-27a9-469d-9690-545a2c483565-goldmane-ca-bundle\") pod \"goldmane-54d579b49d-v4pxb\" (UID: \"f6080d34-27a9-469d-9690-545a2c483565\") " pod="calico-system/goldmane-54d579b49d-v4pxb" Sep 12 00:17:55.576063 kubelet[2728]: I0912 00:17:55.575832 2728 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/bcef082c-9a86-4d91-a4dd-8939318edd7b-config-volume\") pod \"coredns-668d6bf9bc-mlsc9\" (UID: \"bcef082c-9a86-4d91-a4dd-8939318edd7b\") " pod="kube-system/coredns-668d6bf9bc-mlsc9" Sep 12 00:17:55.576238 kubelet[2728]: I0912 00:17:55.575857 2728 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-896zl\" (UniqueName: 
\"kubernetes.io/projected/f6080d34-27a9-469d-9690-545a2c483565-kube-api-access-896zl\") pod \"goldmane-54d579b49d-v4pxb\" (UID: \"f6080d34-27a9-469d-9690-545a2c483565\") " pod="calico-system/goldmane-54d579b49d-v4pxb" Sep 12 00:17:55.576238 kubelet[2728]: I0912 00:17:55.575875 2728 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n6sgm\" (UniqueName: \"kubernetes.io/projected/d85adbdb-2840-4a43-9e8a-c70fcf484d58-kube-api-access-n6sgm\") pod \"calico-apiserver-77d8f65979-kwb72\" (UID: \"d85adbdb-2840-4a43-9e8a-c70fcf484d58\") " pod="calico-apiserver/calico-apiserver-77d8f65979-kwb72" Sep 12 00:17:55.580590 systemd[1]: Created slice kubepods-besteffort-podd85adbdb_2840_4a43_9e8a_c70fcf484d58.slice - libcontainer container kubepods-besteffort-podd85adbdb_2840_4a43_9e8a_c70fcf484d58.slice. Sep 12 00:17:55.586916 systemd[1]: Created slice kubepods-besteffort-pod11547b93_d91e_47e5_8b4b_3dfb6467b5be.slice - libcontainer container kubepods-besteffort-pod11547b93_d91e_47e5_8b4b_3dfb6467b5be.slice. Sep 12 00:17:55.591081 systemd[1]: Created slice kubepods-besteffort-podf6080d34_27a9_469d_9690_545a2c483565.slice - libcontainer container kubepods-besteffort-podf6080d34_27a9_469d_9690_545a2c483565.slice. Sep 12 00:17:55.861098 kubelet[2728]: E0912 00:17:55.860941 2728 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 00:17:55.861754 containerd[1588]: time="2025-09-12T00:17:55.861696722Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-mlsc9,Uid:bcef082c-9a86-4d91-a4dd-8939318edd7b,Namespace:kube-system,Attempt:0,}" Sep 12 00:17:55.867286 containerd[1588]: time="2025-09-12T00:17:55.867225385Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-77d8f65979-w66d2,Uid:71bfd4f7-26e5-4df5-8cca-b940b8c12eb9,Namespace:calico-apiserver,Attempt:0,}" Sep 12 00:17:55.874805 containerd[1588]: time="2025-09-12T00:17:55.874684509Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5c944b767b-95z2p,Uid:9786ace3-0bf8-4cca-8b2e-49b4a075c0b6,Namespace:calico-system,Attempt:0,}" Sep 12 00:17:55.878915 kubelet[2728]: E0912 00:17:55.878872 2728 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 00:17:55.879545 containerd[1588]: time="2025-09-12T00:17:55.879518028Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-qlmhd,Uid:8474d883-1171-4929-b6d0-2b8587f1cfa7,Namespace:kube-system,Attempt:0,}" Sep 12 00:17:55.888249 containerd[1588]: time="2025-09-12T00:17:55.888208563Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-77d8f65979-kwb72,Uid:d85adbdb-2840-4a43-9e8a-c70fcf484d58,Namespace:calico-apiserver,Attempt:0,}" Sep 12 00:17:55.891063 containerd[1588]: time="2025-09-12T00:17:55.890904942Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-7b8465df64-lb84p,Uid:11547b93-d91e-47e5-8b4b-3dfb6467b5be,Namespace:calico-system,Attempt:0,}" Sep 12 00:17:55.893991 containerd[1588]: time="2025-09-12T00:17:55.893969051Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-54d579b49d-v4pxb,Uid:f6080d34-27a9-469d-9690-545a2c483565,Namespace:calico-system,Attempt:0,}" Sep 12 00:17:56.024743 containerd[1588]: time="2025-09-12T00:17:56.024579920Z" 
level=error msg="Failed to destroy network for sandbox \"68e8679839c5bd37df39c9968515f352c27af69e3fd86e0085403952e839633d\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 00:17:56.031010 containerd[1588]: time="2025-09-12T00:17:56.030946595Z" level=error msg="Failed to destroy network for sandbox \"2d5bd95696bc4174ee2a3e5c16a56c55dc3a6faec5548c38f08e5cdfe424194a\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 00:17:56.039174 containerd[1588]: time="2025-09-12T00:17:56.039110181Z" level=error msg="Failed to destroy network for sandbox \"e8895292076e2b919d0bb94913962f3ad5a5f899684c201857e5eafbe647f918\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 00:17:56.059966 containerd[1588]: time="2025-09-12T00:17:56.059904514Z" level=error msg="Failed to destroy network for sandbox \"c2dda630cf20f8e1aef47ed5733c53307b1d77cdc7729c6f7b8c9a40bfe86d52\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 00:17:56.062362 containerd[1588]: time="2025-09-12T00:17:56.062316299Z" level=error msg="Failed to destroy network for sandbox \"71ca3761d764115d8810e1ca4c3315c1345d344383635b79cff6a5fae37fc77f\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 00:17:56.066245 containerd[1588]: time="2025-09-12T00:17:56.066182352Z" level=error msg="Failed to destroy network for sandbox \"d96602861d4a4bb9d340ad11a5f9d5bf9510836a205d33101e86bb9e9f122f15\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 00:17:56.067898 containerd[1588]: time="2025-09-12T00:17:56.067871682Z" level=error msg="Failed to destroy network for sandbox \"40bfc907bff55c419a4ca6a23aee8c1c4aa9fdc89a4e5ec2ddf4dda23f4a8c92\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 00:17:56.170754 containerd[1588]: time="2025-09-12T00:17:56.170679172Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-mlsc9,Uid:bcef082c-9a86-4d91-a4dd-8939318edd7b,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"68e8679839c5bd37df39c9968515f352c27af69e3fd86e0085403952e839633d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 00:17:56.182204 kubelet[2728]: E0912 00:17:56.182159 2728 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"68e8679839c5bd37df39c9968515f352c27af69e3fd86e0085403952e839633d\": plugin type=\"calico\" failed (add): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 00:17:56.182316 kubelet[2728]: E0912 00:17:56.182239 2728 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"68e8679839c5bd37df39c9968515f352c27af69e3fd86e0085403952e839633d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-mlsc9" Sep 12 00:17:56.182316 kubelet[2728]: E0912 00:17:56.182263 2728 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"68e8679839c5bd37df39c9968515f352c27af69e3fd86e0085403952e839633d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-mlsc9" Sep 12 00:17:56.182389 kubelet[2728]: E0912 00:17:56.182307 2728 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-mlsc9_kube-system(bcef082c-9a86-4d91-a4dd-8939318edd7b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-mlsc9_kube-system(bcef082c-9a86-4d91-a4dd-8939318edd7b)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"68e8679839c5bd37df39c9968515f352c27af69e3fd86e0085403952e839633d\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-mlsc9" podUID="bcef082c-9a86-4d91-a4dd-8939318edd7b" Sep 12 00:17:56.250388 containerd[1588]: time="2025-09-12T00:17:56.250319799Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-qlmhd,Uid:8474d883-1171-4929-b6d0-2b8587f1cfa7,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"2d5bd95696bc4174ee2a3e5c16a56c55dc3a6faec5548c38f08e5cdfe424194a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 00:17:56.250633 kubelet[2728]: E0912 00:17:56.250579 2728 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2d5bd95696bc4174ee2a3e5c16a56c55dc3a6faec5548c38f08e5cdfe424194a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 00:17:56.250681 kubelet[2728]: E0912 00:17:56.250647 2728 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2d5bd95696bc4174ee2a3e5c16a56c55dc3a6faec5548c38f08e5cdfe424194a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-qlmhd" Sep 12 00:17:56.250744 kubelet[2728]: E0912 00:17:56.250668 2728 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" 
err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2d5bd95696bc4174ee2a3e5c16a56c55dc3a6faec5548c38f08e5cdfe424194a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-qlmhd" Sep 12 00:17:56.250778 kubelet[2728]: E0912 00:17:56.250742 2728 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-qlmhd_kube-system(8474d883-1171-4929-b6d0-2b8587f1cfa7)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-qlmhd_kube-system(8474d883-1171-4929-b6d0-2b8587f1cfa7)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"2d5bd95696bc4174ee2a3e5c16a56c55dc3a6faec5548c38f08e5cdfe424194a\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-qlmhd" podUID="8474d883-1171-4929-b6d0-2b8587f1cfa7" Sep 12 00:17:56.266840 containerd[1588]: time="2025-09-12T00:17:56.266773689Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-77d8f65979-w66d2,Uid:71bfd4f7-26e5-4df5-8cca-b940b8c12eb9,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"e8895292076e2b919d0bb94913962f3ad5a5f899684c201857e5eafbe647f918\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 00:17:56.267006 kubelet[2728]: E0912 00:17:56.266943 2728 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e8895292076e2b919d0bb94913962f3ad5a5f899684c201857e5eafbe647f918\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 00:17:56.267006 kubelet[2728]: E0912 00:17:56.266979 2728 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e8895292076e2b919d0bb94913962f3ad5a5f899684c201857e5eafbe647f918\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-77d8f65979-w66d2" Sep 12 00:17:56.267105 kubelet[2728]: E0912 00:17:56.266999 2728 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e8895292076e2b919d0bb94913962f3ad5a5f899684c201857e5eafbe647f918\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-77d8f65979-w66d2" Sep 12 00:17:56.268724 kubelet[2728]: E0912 00:17:56.267082 2728 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-77d8f65979-w66d2_calico-apiserver(71bfd4f7-26e5-4df5-8cca-b940b8c12eb9)\" with CreatePodSandboxError: \"Failed to create sandbox for pod 
\\\"calico-apiserver-77d8f65979-w66d2_calico-apiserver(71bfd4f7-26e5-4df5-8cca-b940b8c12eb9)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"e8895292076e2b919d0bb94913962f3ad5a5f899684c201857e5eafbe647f918\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-77d8f65979-w66d2" podUID="71bfd4f7-26e5-4df5-8cca-b940b8c12eb9" Sep 12 00:17:56.288668 containerd[1588]: time="2025-09-12T00:17:56.288592283Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5c944b767b-95z2p,Uid:9786ace3-0bf8-4cca-8b2e-49b4a075c0b6,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"c2dda630cf20f8e1aef47ed5733c53307b1d77cdc7729c6f7b8c9a40bfe86d52\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 00:17:56.288989 kubelet[2728]: E0912 00:17:56.288929 2728 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c2dda630cf20f8e1aef47ed5733c53307b1d77cdc7729c6f7b8c9a40bfe86d52\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 00:17:56.289077 kubelet[2728]: E0912 00:17:56.289015 2728 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c2dda630cf20f8e1aef47ed5733c53307b1d77cdc7729c6f7b8c9a40bfe86d52\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-5c944b767b-95z2p" Sep 12 00:17:56.289077 kubelet[2728]: E0912 00:17:56.289041 2728 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c2dda630cf20f8e1aef47ed5733c53307b1d77cdc7729c6f7b8c9a40bfe86d52\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-5c944b767b-95z2p" Sep 12 00:17:56.289150 kubelet[2728]: E0912 00:17:56.289102 2728 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-5c944b767b-95z2p_calico-system(9786ace3-0bf8-4cca-8b2e-49b4a075c0b6)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-5c944b767b-95z2p_calico-system(9786ace3-0bf8-4cca-8b2e-49b4a075c0b6)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"c2dda630cf20f8e1aef47ed5733c53307b1d77cdc7729c6f7b8c9a40bfe86d52\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-5c944b767b-95z2p" podUID="9786ace3-0bf8-4cca-8b2e-49b4a075c0b6" Sep 12 00:17:56.334233 containerd[1588]: time="2025-09-12T00:17:56.334192735Z" level=error msg="RunPodSandbox for 
&PodSandboxMetadata{Name:whisker-7b8465df64-lb84p,Uid:11547b93-d91e-47e5-8b4b-3dfb6467b5be,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"71ca3761d764115d8810e1ca4c3315c1345d344383635b79cff6a5fae37fc77f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 00:17:56.334499 kubelet[2728]: E0912 00:17:56.334450 2728 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"71ca3761d764115d8810e1ca4c3315c1345d344383635b79cff6a5fae37fc77f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 00:17:56.334657 kubelet[2728]: E0912 00:17:56.334496 2728 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"71ca3761d764115d8810e1ca4c3315c1345d344383635b79cff6a5fae37fc77f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-7b8465df64-lb84p" Sep 12 00:17:56.334657 kubelet[2728]: E0912 00:17:56.334519 2728 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"71ca3761d764115d8810e1ca4c3315c1345d344383635b79cff6a5fae37fc77f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-7b8465df64-lb84p" Sep 12 00:17:56.334657 kubelet[2728]: E0912 00:17:56.334570 2728 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-7b8465df64-lb84p_calico-system(11547b93-d91e-47e5-8b4b-3dfb6467b5be)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-7b8465df64-lb84p_calico-system(11547b93-d91e-47e5-8b4b-3dfb6467b5be)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"71ca3761d764115d8810e1ca4c3315c1345d344383635b79cff6a5fae37fc77f\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-7b8465df64-lb84p" podUID="11547b93-d91e-47e5-8b4b-3dfb6467b5be" Sep 12 00:17:56.336818 containerd[1588]: time="2025-09-12T00:17:56.336793154Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.3\"" Sep 12 00:17:56.369525 containerd[1588]: time="2025-09-12T00:17:56.369443712Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-77d8f65979-kwb72,Uid:d85adbdb-2840-4a43-9e8a-c70fcf484d58,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"d96602861d4a4bb9d340ad11a5f9d5bf9510836a205d33101e86bb9e9f122f15\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 00:17:56.370001 kubelet[2728]: E0912 00:17:56.369944 2728 log.go:32] "RunPodSandbox from runtime service failed" 
err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d96602861d4a4bb9d340ad11a5f9d5bf9510836a205d33101e86bb9e9f122f15\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 00:17:56.370001 kubelet[2728]: E0912 00:17:56.370025 2728 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d96602861d4a4bb9d340ad11a5f9d5bf9510836a205d33101e86bb9e9f122f15\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-77d8f65979-kwb72" Sep 12 00:17:56.370252 kubelet[2728]: E0912 00:17:56.370051 2728 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d96602861d4a4bb9d340ad11a5f9d5bf9510836a205d33101e86bb9e9f122f15\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-77d8f65979-kwb72" Sep 12 00:17:56.370252 kubelet[2728]: E0912 00:17:56.370199 2728 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-77d8f65979-kwb72_calico-apiserver(d85adbdb-2840-4a43-9e8a-c70fcf484d58)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-77d8f65979-kwb72_calico-apiserver(d85adbdb-2840-4a43-9e8a-c70fcf484d58)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"d96602861d4a4bb9d340ad11a5f9d5bf9510836a205d33101e86bb9e9f122f15\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-77d8f65979-kwb72" podUID="d85adbdb-2840-4a43-9e8a-c70fcf484d58" Sep 12 00:17:56.473802 containerd[1588]: time="2025-09-12T00:17:56.473589683Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-54d579b49d-v4pxb,Uid:f6080d34-27a9-469d-9690-545a2c483565,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"40bfc907bff55c419a4ca6a23aee8c1c4aa9fdc89a4e5ec2ddf4dda23f4a8c92\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 00:17:56.474268 kubelet[2728]: E0912 00:17:56.473875 2728 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"40bfc907bff55c419a4ca6a23aee8c1c4aa9fdc89a4e5ec2ddf4dda23f4a8c92\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 00:17:56.474268 kubelet[2728]: E0912 00:17:56.473953 2728 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"40bfc907bff55c419a4ca6a23aee8c1c4aa9fdc89a4e5ec2ddf4dda23f4a8c92\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or 
directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-54d579b49d-v4pxb" Sep 12 00:17:56.474268 kubelet[2728]: E0912 00:17:56.473973 2728 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"40bfc907bff55c419a4ca6a23aee8c1c4aa9fdc89a4e5ec2ddf4dda23f4a8c92\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-54d579b49d-v4pxb" Sep 12 00:17:56.474587 kubelet[2728]: E0912 00:17:56.474028 2728 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-54d579b49d-v4pxb_calico-system(f6080d34-27a9-469d-9690-545a2c483565)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-54d579b49d-v4pxb_calico-system(f6080d34-27a9-469d-9690-545a2c483565)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"40bfc907bff55c419a4ca6a23aee8c1c4aa9fdc89a4e5ec2ddf4dda23f4a8c92\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-54d579b49d-v4pxb" podUID="f6080d34-27a9-469d-9690-545a2c483565" Sep 12 00:17:56.499099 systemd[1]: run-netns-cni\x2d5707bc4c\x2d4543\x2d3d62\x2de371\x2dd0c115184640.mount: Deactivated successfully. Sep 12 00:17:57.414787 systemd[1]: Created slice kubepods-besteffort-pod076fcf6e_1869_4af3_96de_c12eddd0a2fa.slice - libcontainer container kubepods-besteffort-pod076fcf6e_1869_4af3_96de_c12eddd0a2fa.slice. 
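Every one of the sandbox failures above has the same root cause: the Calico CNI plugin could not read /var/lib/calico/nodename, a file the calico/node container writes once it starts and mounts /var/lib/calico/. A minimal sketch of that lookup, assuming nothing beyond what the error text itself says (an illustration of the failure mode, not Calico's actual source):

package main

import (
	"fmt"
	"os"
	"strings"
)

const nodenameFile = "/var/lib/calico/nodename"

// determineNodename resolves this host's Calico node name. Until calico/node
// has started, the stat inside os.ReadFile fails, and the wrapped error is
// the exact message repeated throughout the log above.
func determineNodename() (string, error) {
	data, err := os.ReadFile(nodenameFile)
	if err != nil {
		return "", fmt.Errorf("%w: check that the calico/node container is running and has mounted /var/lib/calico/", err)
	}
	return strings.TrimSpace(string(data)), nil
}

func main() {
	name, err := determineNodename()
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("nodename:", name)
}

The errors stop once the calico-node image finishes pulling and its container starts at 00:18:06 below.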
Sep 12 00:17:57.417690 containerd[1588]: time="2025-09-12T00:17:57.417642485Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-n85pl,Uid:076fcf6e-1869-4af3-96de-c12eddd0a2fa,Namespace:calico-system,Attempt:0,}" Sep 12 00:17:57.476821 containerd[1588]: time="2025-09-12T00:17:57.476747251Z" level=error msg="Failed to destroy network for sandbox \"ef5883c22dcc8eaba7fc3f188c0dfff75662ff86050cec6d17158dba48c59d7f\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 00:17:57.478654 containerd[1588]: time="2025-09-12T00:17:57.478563859Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-n85pl,Uid:076fcf6e-1869-4af3-96de-c12eddd0a2fa,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"ef5883c22dcc8eaba7fc3f188c0dfff75662ff86050cec6d17158dba48c59d7f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 00:17:57.478978 kubelet[2728]: E0912 00:17:57.478919 2728 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ef5883c22dcc8eaba7fc3f188c0dfff75662ff86050cec6d17158dba48c59d7f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 00:17:57.480375 kubelet[2728]: E0912 00:17:57.479011 2728 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ef5883c22dcc8eaba7fc3f188c0dfff75662ff86050cec6d17158dba48c59d7f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-n85pl" Sep 12 00:17:57.480375 kubelet[2728]: E0912 00:17:57.479035 2728 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ef5883c22dcc8eaba7fc3f188c0dfff75662ff86050cec6d17158dba48c59d7f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-n85pl" Sep 12 00:17:57.480375 kubelet[2728]: E0912 00:17:57.479093 2728 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-n85pl_calico-system(076fcf6e-1869-4af3-96de-c12eddd0a2fa)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-n85pl_calico-system(076fcf6e-1869-4af3-96de-c12eddd0a2fa)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"ef5883c22dcc8eaba7fc3f188c0dfff75662ff86050cec6d17158dba48c59d7f\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-n85pl" podUID="076fcf6e-1869-4af3-96de-c12eddd0a2fa" Sep 12 00:17:57.479143 systemd[1]: run-netns-cni\x2de83171c7\x2d436f\x2dc949\x2d160b\x2dc0e7f05e6b99.mount: Deactivated successfully. 
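kubelet keeps retrying each of these pods by calling RunPodSandbox over the CRI gRPC socket; the "rpc error: code = Unknown desc = ..." prefix above is the gRPC status wrapping containerd's CNI error. A hedged sketch of that call using the published CRI API, with the pod metadata taken from the log and the conventional containerd socket path assumed:

package main

import (
	"context"
	"log"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	conn, err := grpc.Dial("unix:///run/containerd/containerd.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()

	rt := runtimeapi.NewRuntimeServiceClient(conn)
	resp, err := rt.RunPodSandbox(context.Background(), &runtimeapi.RunPodSandboxRequest{
		Config: &runtimeapi.PodSandboxConfig{
			Metadata: &runtimeapi.PodSandboxMetadata{
				Name:      "csi-node-driver-n85pl",
				Uid:       "076fcf6e-1869-4af3-96de-c12eddd0a2fa",
				Namespace: "calico-system",
				Attempt:   0,
			},
		},
	})
	if err != nil {
		// While the CNI plugin is broken, this is where the "failed to
		// setup network for sandbox" error surfaces back to the caller.
		log.Fatalf("RunPodSandbox: %v", err)
	}
	log.Printf("sandbox id: %s", resp.PodSandboxId)
}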
Sep 12 00:18:05.319198 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3268088704.mount: Deactivated successfully. Sep 12 00:18:06.111029 containerd[1588]: time="2025-09-12T00:18:06.110969576Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 00:18:06.112142 containerd[1588]: time="2025-09-12T00:18:06.112105857Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.3: active requests=0, bytes read=157078339" Sep 12 00:18:06.113735 containerd[1588]: time="2025-09-12T00:18:06.113505633Z" level=info msg="ImageCreate event name:\"sha256:ce9c4ac0f175f22c56e80844e65379d9ebe1d8a4e2bbb38dc1db0f53a8826f0f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 00:18:06.115623 containerd[1588]: time="2025-09-12T00:18:06.115575907Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:bcb8146fcaeced1e1c88fad3eaa697f1680746bd23c3e7e8d4535bc484c6f2a1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 00:18:06.116112 containerd[1588]: time="2025-09-12T00:18:06.116074402Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.3\" with image id \"sha256:ce9c4ac0f175f22c56e80844e65379d9ebe1d8a4e2bbb38dc1db0f53a8826f0f\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/node@sha256:bcb8146fcaeced1e1c88fad3eaa697f1680746bd23c3e7e8d4535bc484c6f2a1\", size \"157078201\" in 9.779254165s" Sep 12 00:18:06.116112 containerd[1588]: time="2025-09-12T00:18:06.116105159Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.3\" returns image reference \"sha256:ce9c4ac0f175f22c56e80844e65379d9ebe1d8a4e2bbb38dc1db0f53a8826f0f\"" Sep 12 00:18:06.126379 containerd[1588]: time="2025-09-12T00:18:06.126330820Z" level=info msg="CreateContainer within sandbox \"4873774f0f0031cea742e232c30184de1e344e2b34d1f2c46500b30e7e53b600\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Sep 12 00:18:06.149800 containerd[1588]: time="2025-09-12T00:18:06.149746443Z" level=info msg="Container 35d56abd8de7058402ca58dd95fa1e3ff27070645c66ab120a440f99c43d3174: CDI devices from CRI Config.CDIDevices: []" Sep 12 00:18:06.172897 containerd[1588]: time="2025-09-12T00:18:06.172841032Z" level=info msg="CreateContainer within sandbox \"4873774f0f0031cea742e232c30184de1e344e2b34d1f2c46500b30e7e53b600\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"35d56abd8de7058402ca58dd95fa1e3ff27070645c66ab120a440f99c43d3174\"" Sep 12 00:18:06.173536 containerd[1588]: time="2025-09-12T00:18:06.173485992Z" level=info msg="StartContainer for \"35d56abd8de7058402ca58dd95fa1e3ff27070645c66ab120a440f99c43d3174\"" Sep 12 00:18:06.175039 containerd[1588]: time="2025-09-12T00:18:06.175012896Z" level=info msg="connecting to shim 35d56abd8de7058402ca58dd95fa1e3ff27070645c66ab120a440f99c43d3174" address="unix:///run/containerd/s/54197f650cdd861681bb6534edcda78c25987c2134f3069b6681eeac65068c29" protocol=ttrpc version=3 Sep 12 00:18:06.202885 systemd[1]: Started cri-containerd-35d56abd8de7058402ca58dd95fa1e3ff27070645c66ab120a440f99c43d3174.scope - libcontainer container 35d56abd8de7058402ca58dd95fa1e3ff27070645c66ab120a440f99c43d3174. 
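The PullImage, CreateContainer, "connecting to shim", and scope-start sequence above is containerd's normal container lifecycle. A rough equivalent with the containerd Go client, using the image reference from the log; kubelet actually drives this through CRI rather than this client, so treat it as an illustration of the flow, not the real call path:

package main

import (
	"context"
	"log"

	"github.com/containerd/containerd"
	"github.com/containerd/containerd/cio"
	"github.com/containerd/containerd/namespaces"
	"github.com/containerd/containerd/oci"
)

func main() {
	client, err := containerd.New("/run/containerd/containerd.sock")
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()

	// Kubernetes-managed images and containers live in the "k8s.io" namespace.
	ctx := namespaces.WithNamespace(context.Background(), "k8s.io")

	// Counterpart of the PullImage / "stop pulling image" lines.
	image, err := client.Pull(ctx, "ghcr.io/flatcar/calico/node:v3.30.3", containerd.WithPullUnpack)
	if err != nil {
		log.Fatal(err)
	}

	// Counterpart of "CreateContainer within sandbox ... for container calico-node".
	container, err := client.NewContainer(ctx, "calico-node-demo",
		containerd.WithImage(image),
		containerd.WithNewSnapshot("calico-node-demo-snap", image),
		containerd.WithNewSpec(oci.WithImageConfig(image)),
	)
	if err != nil {
		log.Fatal(err)
	}

	// NewTask is what produces the "connecting to shim ... protocol=ttrpc"
	// line: containerd spawns a shim process and talks to it over ttrpc.
	task, err := container.NewTask(ctx, cio.NewCreator(cio.WithStdio))
	if err != nil {
		log.Fatal(err)
	}
	if err := task.Start(ctx); err != nil { // "StartContainer for ..."
		log.Fatal(err)
	}
	log.Printf("task started, pid %d", task.Pid())
}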
Sep 12 00:18:06.256536 containerd[1588]: time="2025-09-12T00:18:06.256460593Z" level=info msg="StartContainer for \"35d56abd8de7058402ca58dd95fa1e3ff27070645c66ab120a440f99c43d3174\" returns successfully" Sep 12 00:18:06.282048 systemd[1]: Started sshd@7-10.0.0.92:22-10.0.0.1:56140.service - OpenSSH per-connection server daemon (10.0.0.1:56140). Sep 12 00:18:06.334671 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Sep 12 00:18:06.334819 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld <Jason@zx2c4.com>. All Rights Reserved. Sep 12 00:18:06.343657 sshd[3856]: Accepted publickey for core from 10.0.0.1 port 56140 ssh2: RSA SHA256:A0suq1pUL9tS3FfPj5cV4y8UVFHgmV6pScsqSKyxj00 Sep 12 00:18:06.345389 sshd-session[3856]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 00:18:06.351133 systemd-logind[1570]: New session 8 of user core. Sep 12 00:18:06.356046 systemd[1]: Started session-8.scope - Session 8 of User core. Sep 12 00:18:06.380977 kubelet[2728]: I0912 00:18:06.380900 2728 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-z4lt6" podStartSLOduration=1.082404323 podStartE2EDuration="23.380879343s" podCreationTimestamp="2025-09-12 00:17:43 +0000 UTC" firstStartedPulling="2025-09-12 00:17:43.818424248 +0000 UTC m=+15.522158207" lastFinishedPulling="2025-09-12 00:18:06.116899258 +0000 UTC m=+37.820633227" observedRunningTime="2025-09-12 00:18:06.377183792 +0000 UTC m=+38.080917761" watchObservedRunningTime="2025-09-12 00:18:06.380879343 +0000 UTC m=+38.084613312" Sep 12 00:18:06.499939 containerd[1588]: time="2025-09-12T00:18:06.499884940Z" level=info msg="TaskExit event in podsandbox handler container_id:\"35d56abd8de7058402ca58dd95fa1e3ff27070645c66ab120a440f99c43d3174\" id:\"e4d68fa3c7206901f5db8a04740e4307b85e4554d43b4749c9f3ffea03770812\" pid:3880 exit_status:1 exited_at:{seconds:1757636286 nanos:499140484}" Sep 12 00:18:06.542174 sshd[3867]: Connection closed by 10.0.0.1 port 56140 Sep 12 00:18:06.543882 sshd-session[3856]: pam_unix(sshd:session): session closed for user core Sep 12 00:18:06.548669 systemd-logind[1570]: Session 8 logged out. Waiting for processes to exit. Sep 12 00:18:06.549490 systemd[1]: sshd@7-10.0.0.92:22-10.0.0.1:56140.service: Deactivated successfully.
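The pod_startup_latency_tracker line can be checked with plain time arithmetic: podStartE2EDuration is watchObservedRunningTime minus podCreationTimestamp, and podStartSLOduration appears to be that figure minus the image-pull window (lastFinishedPulling - firstStartedPulling). Treat that last interpretation as an assumption about the metric, not a statement about kubelet internals; the timestamps below are copied from the log:

package main

import (
	"fmt"
	"time"
)

func mustParse(v string) time.Time {
	// The log's timestamp format; Go accepts the fractional seconds on
	// parse even though the layout omits them.
	t, err := time.Parse("2006-01-02 15:04:05 -0700 MST", v)
	if err != nil {
		panic(err)
	}
	return t
}

func main() {
	created := mustParse("2025-09-12 00:17:43 +0000 UTC")
	firstPull := mustParse("2025-09-12 00:17:43.818424248 +0000 UTC")
	lastPull := mustParse("2025-09-12 00:18:06.116899258 +0000 UTC")
	observed := mustParse("2025-09-12 00:18:06.380879343 +0000 UTC")

	e2e := observed.Sub(created) // 23.380879343s, matching podStartE2EDuration
	// 1.082404333s here; the log's 1.082404323 differs in the last digits
	// because kubelet subtracts monotonic-clock readings (the m=+ offsets).
	slo := e2e - lastPull.Sub(firstPull)
	fmt.Println(e2e, slo)
}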
Sep 12 00:18:06.550190 kubelet[2728]: I0912 00:18:06.550150 2728 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/11547b93-d91e-47e5-8b4b-3dfb6467b5be-whisker-ca-bundle\") pod \"11547b93-d91e-47e5-8b4b-3dfb6467b5be\" (UID: \"11547b93-d91e-47e5-8b4b-3dfb6467b5be\") " Sep 12 00:18:06.550254 kubelet[2728]: I0912 00:18:06.550221 2728 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f9plq\" (UniqueName: \"kubernetes.io/projected/11547b93-d91e-47e5-8b4b-3dfb6467b5be-kube-api-access-f9plq\") pod \"11547b93-d91e-47e5-8b4b-3dfb6467b5be\" (UID: \"11547b93-d91e-47e5-8b4b-3dfb6467b5be\") " Sep 12 00:18:06.550254 kubelet[2728]: I0912 00:18:06.550252 2728 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/11547b93-d91e-47e5-8b4b-3dfb6467b5be-whisker-backend-key-pair\") pod \"11547b93-d91e-47e5-8b4b-3dfb6467b5be\" (UID: \"11547b93-d91e-47e5-8b4b-3dfb6467b5be\") " Sep 12 00:18:06.551645 kubelet[2728]: I0912 00:18:06.551081 2728 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/11547b93-d91e-47e5-8b4b-3dfb6467b5be-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "11547b93-d91e-47e5-8b4b-3dfb6467b5be" (UID: "11547b93-d91e-47e5-8b4b-3dfb6467b5be"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Sep 12 00:18:06.554202 systemd[1]: session-8.scope: Deactivated successfully. Sep 12 00:18:06.555828 systemd-logind[1570]: Removed session 8. Sep 12 00:18:06.558660 systemd[1]: var-lib-kubelet-pods-11547b93\x2dd91e\x2d47e5\x2d8b4b\x2d3dfb6467b5be-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2df9plq.mount: Deactivated successfully. Sep 12 00:18:06.558878 systemd[1]: var-lib-kubelet-pods-11547b93\x2dd91e\x2d47e5\x2d8b4b\x2d3dfb6467b5be-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. Sep 12 00:18:06.563246 kubelet[2728]: I0912 00:18:06.563107 2728 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/11547b93-d91e-47e5-8b4b-3dfb6467b5be-kube-api-access-f9plq" (OuterVolumeSpecName: "kube-api-access-f9plq") pod "11547b93-d91e-47e5-8b4b-3dfb6467b5be" (UID: "11547b93-d91e-47e5-8b4b-3dfb6467b5be"). InnerVolumeSpecName "kube-api-access-f9plq". PluginName "kubernetes.io/projected", VolumeGIDValue "" Sep 12 00:18:06.563246 kubelet[2728]: I0912 00:18:06.563222 2728 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/11547b93-d91e-47e5-8b4b-3dfb6467b5be-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "11547b93-d91e-47e5-8b4b-3dfb6467b5be" (UID: "11547b93-d91e-47e5-8b4b-3dfb6467b5be"). InnerVolumeSpecName "whisker-backend-key-pair". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Sep 12 00:18:06.651055 kubelet[2728]: I0912 00:18:06.650905 2728 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/11547b93-d91e-47e5-8b4b-3dfb6467b5be-whisker-ca-bundle\") on node \"localhost\" DevicePath \"\"" Sep 12 00:18:06.651055 kubelet[2728]: I0912 00:18:06.650938 2728 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-f9plq\" (UniqueName: \"kubernetes.io/projected/11547b93-d91e-47e5-8b4b-3dfb6467b5be-kube-api-access-f9plq\") on node \"localhost\" DevicePath \"\"" Sep 12 00:18:06.651055 kubelet[2728]: I0912 00:18:06.650949 2728 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/11547b93-d91e-47e5-8b4b-3dfb6467b5be-whisker-backend-key-pair\") on node \"localhost\" DevicePath \"\"" Sep 12 00:18:07.371516 systemd[1]: Removed slice kubepods-besteffort-pod11547b93_d91e_47e5_8b4b_3dfb6467b5be.slice - libcontainer container kubepods-besteffort-pod11547b93_d91e_47e5_8b4b_3dfb6467b5be.slice. Sep 12 00:18:07.409348 kubelet[2728]: E0912 00:18:07.409304 2728 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 00:18:07.411151 containerd[1588]: time="2025-09-12T00:18:07.411096258Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-mlsc9,Uid:bcef082c-9a86-4d91-a4dd-8939318edd7b,Namespace:kube-system,Attempt:0,}" Sep 12 00:18:07.434415 systemd[1]: Created slice kubepods-besteffort-poda52aa91f_68eb_458c_97e2_1ee666641eeb.slice - libcontainer container kubepods-besteffort-poda52aa91f_68eb_458c_97e2_1ee666641eeb.slice. 
Sep 12 00:18:07.455902 kubelet[2728]: I0912 00:18:07.455847 2728 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-blbgs\" (UniqueName: \"kubernetes.io/projected/a52aa91f-68eb-458c-97e2-1ee666641eeb-kube-api-access-blbgs\") pod \"whisker-7bfb5fcc5f-lcllm\" (UID: \"a52aa91f-68eb-458c-97e2-1ee666641eeb\") " pod="calico-system/whisker-7bfb5fcc5f-lcllm" Sep 12 00:18:07.457733 kubelet[2728]: I0912 00:18:07.457229 2728 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/a52aa91f-68eb-458c-97e2-1ee666641eeb-whisker-backend-key-pair\") pod \"whisker-7bfb5fcc5f-lcllm\" (UID: \"a52aa91f-68eb-458c-97e2-1ee666641eeb\") " pod="calico-system/whisker-7bfb5fcc5f-lcllm" Sep 12 00:18:07.457733 kubelet[2728]: I0912 00:18:07.457277 2728 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a52aa91f-68eb-458c-97e2-1ee666641eeb-whisker-ca-bundle\") pod \"whisker-7bfb5fcc5f-lcllm\" (UID: \"a52aa91f-68eb-458c-97e2-1ee666641eeb\") " pod="calico-system/whisker-7bfb5fcc5f-lcllm" Sep 12 00:18:07.479417 containerd[1588]: time="2025-09-12T00:18:07.479365273Z" level=info msg="TaskExit event in podsandbox handler container_id:\"35d56abd8de7058402ca58dd95fa1e3ff27070645c66ab120a440f99c43d3174\" id:\"d00aecbe0846b3cb7bab7a754be6961842fae4737e94c394ea687e41eee9288a\" pid:3938 exit_status:1 exited_at:{seconds:1757636287 nanos:479054069}" Sep 12 00:18:07.569156 systemd-networkd[1478]: cali36ac8c00a42: Link UP Sep 12 00:18:07.569922 systemd-networkd[1478]: cali36ac8c00a42: Gained carrier Sep 12 00:18:07.585122 containerd[1588]: 2025-09-12 00:18:07.451 [INFO][3949] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Sep 12 00:18:07.585122 containerd[1588]: 2025-09-12 00:18:07.467 [INFO][3949] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--668d6bf9bc--mlsc9-eth0 coredns-668d6bf9bc- kube-system bcef082c-9a86-4d91-a4dd-8939318edd7b 861 0 2025-09-12 00:17:31 +0000 UTC map[k8s-app:kube-dns pod-template-hash:668d6bf9bc projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-668d6bf9bc-mlsc9 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali36ac8c00a42 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="4654c1c3b6d67868b385f483dd34a3a412ae533b7af77bd9904e3da9b276f92a" Namespace="kube-system" Pod="coredns-668d6bf9bc-mlsc9" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--mlsc9-" Sep 12 00:18:07.585122 containerd[1588]: 2025-09-12 00:18:07.467 [INFO][3949] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="4654c1c3b6d67868b385f483dd34a3a412ae533b7af77bd9904e3da9b276f92a" Namespace="kube-system" Pod="coredns-668d6bf9bc-mlsc9" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--mlsc9-eth0" Sep 12 00:18:07.585122 containerd[1588]: 2025-09-12 00:18:07.526 [INFO][3965] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="4654c1c3b6d67868b385f483dd34a3a412ae533b7af77bd9904e3da9b276f92a" HandleID="k8s-pod-network.4654c1c3b6d67868b385f483dd34a3a412ae533b7af77bd9904e3da9b276f92a" Workload="localhost-k8s-coredns--668d6bf9bc--mlsc9-eth0" Sep 12 00:18:07.585423 containerd[1588]: 2025-09-12 
00:18:07.527 [INFO][3965] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="4654c1c3b6d67868b385f483dd34a3a412ae533b7af77bd9904e3da9b276f92a" HandleID="k8s-pod-network.4654c1c3b6d67868b385f483dd34a3a412ae533b7af77bd9904e3da9b276f92a" Workload="localhost-k8s-coredns--668d6bf9bc--mlsc9-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004f370), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-668d6bf9bc-mlsc9", "timestamp":"2025-09-12 00:18:07.526537666 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 12 00:18:07.585423 containerd[1588]: 2025-09-12 00:18:07.527 [INFO][3965] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 00:18:07.585423 containerd[1588]: 2025-09-12 00:18:07.527 [INFO][3965] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 12 00:18:07.585423 containerd[1588]: 2025-09-12 00:18:07.527 [INFO][3965] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 12 00:18:07.585423 containerd[1588]: 2025-09-12 00:18:07.535 [INFO][3965] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.4654c1c3b6d67868b385f483dd34a3a412ae533b7af77bd9904e3da9b276f92a" host="localhost" Sep 12 00:18:07.585423 containerd[1588]: 2025-09-12 00:18:07.540 [INFO][3965] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 12 00:18:07.585423 containerd[1588]: 2025-09-12 00:18:07.543 [INFO][3965] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 12 00:18:07.585423 containerd[1588]: 2025-09-12 00:18:07.545 [INFO][3965] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 12 00:18:07.585423 containerd[1588]: 2025-09-12 00:18:07.546 [INFO][3965] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 12 00:18:07.585423 containerd[1588]: 2025-09-12 00:18:07.546 [INFO][3965] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.4654c1c3b6d67868b385f483dd34a3a412ae533b7af77bd9904e3da9b276f92a" host="localhost" Sep 12 00:18:07.585726 containerd[1588]: 2025-09-12 00:18:07.547 [INFO][3965] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.4654c1c3b6d67868b385f483dd34a3a412ae533b7af77bd9904e3da9b276f92a Sep 12 00:18:07.585726 containerd[1588]: 2025-09-12 00:18:07.550 [INFO][3965] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.4654c1c3b6d67868b385f483dd34a3a412ae533b7af77bd9904e3da9b276f92a" host="localhost" Sep 12 00:18:07.585726 containerd[1588]: 2025-09-12 00:18:07.554 [INFO][3965] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.129/26] block=192.168.88.128/26 handle="k8s-pod-network.4654c1c3b6d67868b385f483dd34a3a412ae533b7af77bd9904e3da9b276f92a" host="localhost" Sep 12 00:18:07.585726 containerd[1588]: 2025-09-12 00:18:07.554 [INFO][3965] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.129/26] handle="k8s-pod-network.4654c1c3b6d67868b385f483dd34a3a412ae533b7af77bd9904e3da9b276f92a" host="localhost" Sep 12 00:18:07.585726 containerd[1588]: 2025-09-12 00:18:07.554 [INFO][3965] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
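The IPAM walk above (look up the host's block affinity, load 192.168.88.128/26, hand out the first free address) is ordinary CIDR bookkeeping. A toy version with the standard library's net/netip that reproduces the two assignments seen in this log; real Calico IPAM also persists handles and affinities in the datastore, which this skips:

package main

import (
	"fmt"
	"net/netip"
)

// firstFree returns the first unused address in block, skipping the network
// address itself (the log's assignments start at .129, not .128).
func firstFree(block netip.Prefix, used map[netip.Addr]bool) (netip.Addr, bool) {
	for a := block.Addr().Next(); block.Contains(a); a = a.Next() {
		if !used[a] {
			return a, true
		}
	}
	return netip.Addr{}, false
}

func main() {
	block := netip.MustParsePrefix("192.168.88.128/26") // this host's affine block
	used := map[netip.Addr]bool{}

	a, _ := firstFree(block, used) // 192.168.88.129, claimed for coredns above
	used[a] = true
	b, _ := firstFree(block, used) // 192.168.88.130, claimed for whisker below
	fmt.Println(a, b)
}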
Sep 12 00:18:07.585726 containerd[1588]: 2025-09-12 00:18:07.554 [INFO][3965] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.129/26] IPv6=[] ContainerID="4654c1c3b6d67868b385f483dd34a3a412ae533b7af77bd9904e3da9b276f92a" HandleID="k8s-pod-network.4654c1c3b6d67868b385f483dd34a3a412ae533b7af77bd9904e3da9b276f92a" Workload="localhost-k8s-coredns--668d6bf9bc--mlsc9-eth0" Sep 12 00:18:07.585912 containerd[1588]: 2025-09-12 00:18:07.558 [INFO][3949] cni-plugin/k8s.go 418: Populated endpoint ContainerID="4654c1c3b6d67868b385f483dd34a3a412ae533b7af77bd9904e3da9b276f92a" Namespace="kube-system" Pod="coredns-668d6bf9bc-mlsc9" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--mlsc9-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--mlsc9-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"bcef082c-9a86-4d91-a4dd-8939318edd7b", ResourceVersion:"861", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 0, 17, 31, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-668d6bf9bc-mlsc9", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali36ac8c00a42", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 00:18:07.586003 containerd[1588]: 2025-09-12 00:18:07.558 [INFO][3949] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.129/32] ContainerID="4654c1c3b6d67868b385f483dd34a3a412ae533b7af77bd9904e3da9b276f92a" Namespace="kube-system" Pod="coredns-668d6bf9bc-mlsc9" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--mlsc9-eth0" Sep 12 00:18:07.586003 containerd[1588]: 2025-09-12 00:18:07.558 [INFO][3949] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali36ac8c00a42 ContainerID="4654c1c3b6d67868b385f483dd34a3a412ae533b7af77bd9904e3da9b276f92a" Namespace="kube-system" Pod="coredns-668d6bf9bc-mlsc9" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--mlsc9-eth0" Sep 12 00:18:07.586003 containerd[1588]: 2025-09-12 00:18:07.570 [INFO][3949] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="4654c1c3b6d67868b385f483dd34a3a412ae533b7af77bd9904e3da9b276f92a" Namespace="kube-system" Pod="coredns-668d6bf9bc-mlsc9" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--mlsc9-eth0" Sep 12 00:18:07.586092 
containerd[1588]: 2025-09-12 00:18:07.571 [INFO][3949] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="4654c1c3b6d67868b385f483dd34a3a412ae533b7af77bd9904e3da9b276f92a" Namespace="kube-system" Pod="coredns-668d6bf9bc-mlsc9" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--mlsc9-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--mlsc9-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"bcef082c-9a86-4d91-a4dd-8939318edd7b", ResourceVersion:"861", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 0, 17, 31, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"4654c1c3b6d67868b385f483dd34a3a412ae533b7af77bd9904e3da9b276f92a", Pod:"coredns-668d6bf9bc-mlsc9", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali36ac8c00a42", MAC:"7a:17:de:d6:c5:f3", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 00:18:07.586092 containerd[1588]: 2025-09-12 00:18:07.579 [INFO][3949] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="4654c1c3b6d67868b385f483dd34a3a412ae533b7af77bd9904e3da9b276f92a" Namespace="kube-system" Pod="coredns-668d6bf9bc-mlsc9" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--mlsc9-eth0" Sep 12 00:18:07.622781 containerd[1588]: time="2025-09-12T00:18:07.622604111Z" level=info msg="connecting to shim 4654c1c3b6d67868b385f483dd34a3a412ae533b7af77bd9904e3da9b276f92a" address="unix:///run/containerd/s/86660693bdb47f82b043e3b3d9c0a77d249bd40545acf09fbb5ada97a1683855" namespace=k8s.io protocol=ttrpc version=3 Sep 12 00:18:07.644901 systemd[1]: Started cri-containerd-4654c1c3b6d67868b385f483dd34a3a412ae533b7af77bd9904e3da9b276f92a.scope - libcontainer container 4654c1c3b6d67868b385f483dd34a3a412ae533b7af77bd9904e3da9b276f92a. 
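"Setting the host side veth name to cali36ac8c00a42" refers to the host end of the veth pair that carries the pod's eth0; the interface name is taken from the log, while the scheme Calico uses to derive such names is not shown here. Purely as an illustration of that dataplane step, a sketch that creates and raises a veth pair with the vishvananda/netlink package (needs CAP_NET_ADMIN; the peer name is a made-up placeholder):

package main

import (
	"log"

	"github.com/vishvananda/netlink"
)

func main() {
	veth := &netlink.Veth{
		LinkAttrs: netlink.LinkAttrs{Name: "cali36ac8c00a42"}, // host-side name from the log
		PeerName:  "tmp-pod-eth0",                             // hypothetical; the real peer becomes eth0 in the pod netns
	}
	if err := netlink.LinkAdd(veth); err != nil {
		log.Fatal(err)
	}
	if err := netlink.LinkSetUp(veth); err != nil {
		log.Fatal(err)
	}
	// Once the link is up, systemd-networkd logs "Link UP" / "Gained carrier",
	// as seen above for cali36ac8c00a42.
	log.Println("host-side veth created and up")
}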
Sep 12 00:18:07.657806 systemd-resolved[1406]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 12 00:18:07.687635 containerd[1588]: time="2025-09-12T00:18:07.687585329Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-mlsc9,Uid:bcef082c-9a86-4d91-a4dd-8939318edd7b,Namespace:kube-system,Attempt:0,} returns sandbox id \"4654c1c3b6d67868b385f483dd34a3a412ae533b7af77bd9904e3da9b276f92a\"" Sep 12 00:18:07.688302 kubelet[2728]: E0912 00:18:07.688260 2728 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 00:18:07.691008 containerd[1588]: time="2025-09-12T00:18:07.690982401Z" level=info msg="CreateContainer within sandbox \"4654c1c3b6d67868b385f483dd34a3a412ae533b7af77bd9904e3da9b276f92a\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Sep 12 00:18:07.719793 containerd[1588]: time="2025-09-12T00:18:07.719545017Z" level=info msg="Container f19e2ff884dce1fe0760670589267b0c089bc013658b2e1165d0ee113030d275: CDI devices from CRI Config.CDIDevices: []" Sep 12 00:18:07.733316 containerd[1588]: time="2025-09-12T00:18:07.733255325Z" level=info msg="CreateContainer within sandbox \"4654c1c3b6d67868b385f483dd34a3a412ae533b7af77bd9904e3da9b276f92a\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"f19e2ff884dce1fe0760670589267b0c089bc013658b2e1165d0ee113030d275\"" Sep 12 00:18:07.734566 containerd[1588]: time="2025-09-12T00:18:07.734508906Z" level=info msg="StartContainer for \"f19e2ff884dce1fe0760670589267b0c089bc013658b2e1165d0ee113030d275\"" Sep 12 00:18:07.735600 containerd[1588]: time="2025-09-12T00:18:07.735563444Z" level=info msg="connecting to shim f19e2ff884dce1fe0760670589267b0c089bc013658b2e1165d0ee113030d275" address="unix:///run/containerd/s/86660693bdb47f82b043e3b3d9c0a77d249bd40545acf09fbb5ada97a1683855" protocol=ttrpc version=3 Sep 12 00:18:07.739826 containerd[1588]: time="2025-09-12T00:18:07.739502282Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-7bfb5fcc5f-lcllm,Uid:a52aa91f-68eb-458c-97e2-1ee666641eeb,Namespace:calico-system,Attempt:0,}" Sep 12 00:18:07.776908 systemd[1]: Started cri-containerd-f19e2ff884dce1fe0760670589267b0c089bc013658b2e1165d0ee113030d275.scope - libcontainer container f19e2ff884dce1fe0760670589267b0c089bc013658b2e1165d0ee113030d275. 
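The recurring "Nameserver limits exceeded" warning is kubelet capping a pod's resolv.conf at three nameservers and keeping the first entries, which is why the applied line reads "1.1.1.1 1.0.0.1 8.8.8.8". A sketch of that truncation; the fourth, dropped resolver below is a stand-in, since the log never names the omitted server:

package main

import (
	"fmt"
	"strings"
)

const maxDNSNameservers = 3 // Kubernetes' documented per-pod nameserver limit

func capNameservers(ns []string) []string {
	if len(ns) > maxDNSNameservers {
		return ns[:maxDNSNameservers]
	}
	return ns
}

func main() {
	// Host resolv.conf with one server too many; the last entry is hypothetical.
	resolvers := []string{"1.1.1.1", "1.0.0.1", "8.8.8.8", "8.8.4.4"}
	fmt.Println(strings.Join(capNameservers(resolvers), " ")) // "1.1.1.1 1.0.0.1 8.8.8.8"
}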
Sep 12 00:18:07.963007 containerd[1588]: time="2025-09-12T00:18:07.962194324Z" level=info msg="StartContainer for \"f19e2ff884dce1fe0760670589267b0c089bc013658b2e1165d0ee113030d275\" returns successfully" Sep 12 00:18:08.037205 systemd-networkd[1478]: cali015f86c58be: Link UP Sep 12 00:18:08.038223 systemd-networkd[1478]: cali015f86c58be: Gained carrier Sep 12 00:18:08.054448 containerd[1588]: 2025-09-12 00:18:07.827 [INFO][4102] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Sep 12 00:18:08.054448 containerd[1588]: 2025-09-12 00:18:07.844 [INFO][4102] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-whisker--7bfb5fcc5f--lcllm-eth0 whisker-7bfb5fcc5f- calico-system a52aa91f-68eb-458c-97e2-1ee666641eeb 987 0 2025-09-12 00:18:07 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:7bfb5fcc5f projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s localhost whisker-7bfb5fcc5f-lcllm eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] cali015f86c58be [] [] }} ContainerID="d68572e8179f6134a9a24125ff02534660f9b073aec049609dfec52829071425" Namespace="calico-system" Pod="whisker-7bfb5fcc5f-lcllm" WorkloadEndpoint="localhost-k8s-whisker--7bfb5fcc5f--lcllm-" Sep 12 00:18:08.054448 containerd[1588]: 2025-09-12 00:18:07.845 [INFO][4102] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="d68572e8179f6134a9a24125ff02534660f9b073aec049609dfec52829071425" Namespace="calico-system" Pod="whisker-7bfb5fcc5f-lcllm" WorkloadEndpoint="localhost-k8s-whisker--7bfb5fcc5f--lcllm-eth0" Sep 12 00:18:08.054448 containerd[1588]: 2025-09-12 00:18:07.979 [INFO][4163] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="d68572e8179f6134a9a24125ff02534660f9b073aec049609dfec52829071425" HandleID="k8s-pod-network.d68572e8179f6134a9a24125ff02534660f9b073aec049609dfec52829071425" Workload="localhost-k8s-whisker--7bfb5fcc5f--lcllm-eth0" Sep 12 00:18:08.054448 containerd[1588]: 2025-09-12 00:18:07.979 [INFO][4163] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="d68572e8179f6134a9a24125ff02534660f9b073aec049609dfec52829071425" HandleID="k8s-pod-network.d68572e8179f6134a9a24125ff02534660f9b073aec049609dfec52829071425" Workload="localhost-k8s-whisker--7bfb5fcc5f--lcllm-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000325490), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"whisker-7bfb5fcc5f-lcllm", "timestamp":"2025-09-12 00:18:07.979336569 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 12 00:18:08.054448 containerd[1588]: 2025-09-12 00:18:07.979 [INFO][4163] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 00:18:08.054448 containerd[1588]: 2025-09-12 00:18:07.980 [INFO][4163] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
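The "About to acquire / Acquired / Released host-wide IPAM lock" triplet brackets every assignment so that concurrent CNI invocations on the same host cannot race for one address. As an illustration only (Calico's actual lock lives inside its IPAM code and is not a lock file), a minimal flock-based sketch of the same serialization pattern:

package main

import (
	"log"
	"os"

	"golang.org/x/sys/unix"
)

// withHostLock runs fn while holding an exclusive lock on path, mirroring the
// acquire/release lines in the log. The lock-file path is made up.
func withHostLock(path string, fn func() error) error {
	f, err := os.OpenFile(path, os.O_CREATE|os.O_RDWR, 0o600)
	if err != nil {
		return err
	}
	defer f.Close()
	if err := unix.Flock(int(f.Fd()), unix.LOCK_EX); err != nil {
		return err
	}
	defer unix.Flock(int(f.Fd()), unix.LOCK_UN) // "Released host-wide IPAM lock"
	return fn() // runs between "Acquired" and "Released"
}

func main() {
	err := withHostLock("/tmp/ipam.lock", func() error {
		log.Println("assigning one IPv4 address from the host's affine block")
		return nil
	})
	if err != nil {
		log.Fatal(err)
	}
}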
Sep 12 00:18:08.054448 containerd[1588]: 2025-09-12 00:18:07.980 [INFO][4163] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 12 00:18:08.054448 containerd[1588]: 2025-09-12 00:18:07.987 [INFO][4163] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.d68572e8179f6134a9a24125ff02534660f9b073aec049609dfec52829071425" host="localhost" Sep 12 00:18:08.054448 containerd[1588]: 2025-09-12 00:18:08.006 [INFO][4163] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 12 00:18:08.054448 containerd[1588]: 2025-09-12 00:18:08.010 [INFO][4163] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 12 00:18:08.054448 containerd[1588]: 2025-09-12 00:18:08.012 [INFO][4163] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 12 00:18:08.054448 containerd[1588]: 2025-09-12 00:18:08.014 [INFO][4163] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 12 00:18:08.054448 containerd[1588]: 2025-09-12 00:18:08.014 [INFO][4163] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.d68572e8179f6134a9a24125ff02534660f9b073aec049609dfec52829071425" host="localhost" Sep 12 00:18:08.054448 containerd[1588]: 2025-09-12 00:18:08.015 [INFO][4163] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.d68572e8179f6134a9a24125ff02534660f9b073aec049609dfec52829071425 Sep 12 00:18:08.054448 containerd[1588]: 2025-09-12 00:18:08.019 [INFO][4163] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.d68572e8179f6134a9a24125ff02534660f9b073aec049609dfec52829071425" host="localhost" Sep 12 00:18:08.054448 containerd[1588]: 2025-09-12 00:18:08.026 [INFO][4163] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.130/26] block=192.168.88.128/26 handle="k8s-pod-network.d68572e8179f6134a9a24125ff02534660f9b073aec049609dfec52829071425" host="localhost" Sep 12 00:18:08.054448 containerd[1588]: 2025-09-12 00:18:08.026 [INFO][4163] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.130/26] handle="k8s-pod-network.d68572e8179f6134a9a24125ff02534660f9b073aec049609dfec52829071425" host="localhost" Sep 12 00:18:08.054448 containerd[1588]: 2025-09-12 00:18:08.026 [INFO][4163] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Sep 12 00:18:08.054448 containerd[1588]: 2025-09-12 00:18:08.026 [INFO][4163] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.130/26] IPv6=[] ContainerID="d68572e8179f6134a9a24125ff02534660f9b073aec049609dfec52829071425" HandleID="k8s-pod-network.d68572e8179f6134a9a24125ff02534660f9b073aec049609dfec52829071425" Workload="localhost-k8s-whisker--7bfb5fcc5f--lcllm-eth0" Sep 12 00:18:08.055053 containerd[1588]: 2025-09-12 00:18:08.033 [INFO][4102] cni-plugin/k8s.go 418: Populated endpoint ContainerID="d68572e8179f6134a9a24125ff02534660f9b073aec049609dfec52829071425" Namespace="calico-system" Pod="whisker-7bfb5fcc5f-lcllm" WorkloadEndpoint="localhost-k8s-whisker--7bfb5fcc5f--lcllm-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--7bfb5fcc5f--lcllm-eth0", GenerateName:"whisker-7bfb5fcc5f-", Namespace:"calico-system", SelfLink:"", UID:"a52aa91f-68eb-458c-97e2-1ee666641eeb", ResourceVersion:"987", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 0, 18, 7, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"7bfb5fcc5f", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"whisker-7bfb5fcc5f-lcllm", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali015f86c58be", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 00:18:08.055053 containerd[1588]: 2025-09-12 00:18:08.033 [INFO][4102] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.130/32] ContainerID="d68572e8179f6134a9a24125ff02534660f9b073aec049609dfec52829071425" Namespace="calico-system" Pod="whisker-7bfb5fcc5f-lcllm" WorkloadEndpoint="localhost-k8s-whisker--7bfb5fcc5f--lcllm-eth0" Sep 12 00:18:08.055053 containerd[1588]: 2025-09-12 00:18:08.033 [INFO][4102] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali015f86c58be ContainerID="d68572e8179f6134a9a24125ff02534660f9b073aec049609dfec52829071425" Namespace="calico-system" Pod="whisker-7bfb5fcc5f-lcllm" WorkloadEndpoint="localhost-k8s-whisker--7bfb5fcc5f--lcllm-eth0" Sep 12 00:18:08.055053 containerd[1588]: 2025-09-12 00:18:08.039 [INFO][4102] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="d68572e8179f6134a9a24125ff02534660f9b073aec049609dfec52829071425" Namespace="calico-system" Pod="whisker-7bfb5fcc5f-lcllm" WorkloadEndpoint="localhost-k8s-whisker--7bfb5fcc5f--lcllm-eth0" Sep 12 00:18:08.055053 containerd[1588]: 2025-09-12 00:18:08.039 [INFO][4102] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="d68572e8179f6134a9a24125ff02534660f9b073aec049609dfec52829071425" Namespace="calico-system" Pod="whisker-7bfb5fcc5f-lcllm" WorkloadEndpoint="localhost-k8s-whisker--7bfb5fcc5f--lcllm-eth0" 
endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--7bfb5fcc5f--lcllm-eth0", GenerateName:"whisker-7bfb5fcc5f-", Namespace:"calico-system", SelfLink:"", UID:"a52aa91f-68eb-458c-97e2-1ee666641eeb", ResourceVersion:"987", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 0, 18, 7, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"7bfb5fcc5f", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"d68572e8179f6134a9a24125ff02534660f9b073aec049609dfec52829071425", Pod:"whisker-7bfb5fcc5f-lcllm", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali015f86c58be", MAC:"9a:11:1f:42:be:da", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 00:18:08.055053 containerd[1588]: 2025-09-12 00:18:08.049 [INFO][4102] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="d68572e8179f6134a9a24125ff02534660f9b073aec049609dfec52829071425" Namespace="calico-system" Pod="whisker-7bfb5fcc5f-lcllm" WorkloadEndpoint="localhost-k8s-whisker--7bfb5fcc5f--lcllm-eth0" Sep 12 00:18:08.090336 containerd[1588]: time="2025-09-12T00:18:08.090253849Z" level=info msg="connecting to shim d68572e8179f6134a9a24125ff02534660f9b073aec049609dfec52829071425" address="unix:///run/containerd/s/e6ca6d7af1ccf119f2dcc8f5b49156bb1c05cbe8fcf072a2d55acc0a30b86d67" namespace=k8s.io protocol=ttrpc version=3 Sep 12 00:18:08.166890 systemd[1]: Started cri-containerd-d68572e8179f6134a9a24125ff02534660f9b073aec049609dfec52829071425.scope - libcontainer container d68572e8179f6134a9a24125ff02534660f9b073aec049609dfec52829071425. 
Sep 12 00:18:08.181735 systemd-resolved[1406]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
Sep 12 00:18:08.213395 containerd[1588]: time="2025-09-12T00:18:08.213252950Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-7bfb5fcc5f-lcllm,Uid:a52aa91f-68eb-458c-97e2-1ee666641eeb,Namespace:calico-system,Attempt:0,} returns sandbox id \"d68572e8179f6134a9a24125ff02534660f9b073aec049609dfec52829071425\""
Sep 12 00:18:08.217012 containerd[1588]: time="2025-09-12T00:18:08.215899314Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.3\""
Sep 12 00:18:08.344181 systemd-networkd[1478]: vxlan.calico: Link UP
Sep 12 00:18:08.344195 systemd-networkd[1478]: vxlan.calico: Gained carrier
Sep 12 00:18:08.390575 kubelet[2728]: E0912 00:18:08.390536    2728 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 12 00:18:08.409747 kubelet[2728]: E0912 00:18:08.408619    2728 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 12 00:18:08.413066 containerd[1588]: time="2025-09-12T00:18:08.410842703Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-qlmhd,Uid:8474d883-1171-4929-b6d0-2b8587f1cfa7,Namespace:kube-system,Attempt:0,}"
Sep 12 00:18:08.413939 containerd[1588]: time="2025-09-12T00:18:08.413801502Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-77d8f65979-w66d2,Uid:71bfd4f7-26e5-4df5-8cca-b940b8c12eb9,Namespace:calico-apiserver,Attempt:0,}"
Sep 12 00:18:08.433186 kubelet[2728]: I0912 00:18:08.428698    2728 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="11547b93-d91e-47e5-8b4b-3dfb6467b5be" path="/var/lib/kubelet/pods/11547b93-d91e-47e5-8b4b-3dfb6467b5be/volumes"
Sep 12 00:18:08.444345 kubelet[2728]: I0912 00:18:08.444268    2728 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-mlsc9" podStartSLOduration=37.444236599999996 podStartE2EDuration="37.4442366s" podCreationTimestamp="2025-09-12 00:17:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-12 00:18:08.416744553 +0000 UTC m=+40.120478632" watchObservedRunningTime="2025-09-12 00:18:08.4442366 +0000 UTC m=+40.147970579"
Sep 12 00:18:08.556522 containerd[1588]: time="2025-09-12T00:18:08.556387062Z" level=info msg="TaskExit event in podsandbox handler container_id:\"35d56abd8de7058402ca58dd95fa1e3ff27070645c66ab120a440f99c43d3174\" id:\"dfc166bf75bdd9adfa9ab0b30814def52e27b8de590f1111b9ba0bf62db4d06a\" pid:4322 exit_status:1 exited_at:{seconds:1757636288 nanos:554180403}"
Sep 12 00:18:08.605673 systemd-networkd[1478]: calie652a8872d0: Link UP
Sep 12 00:18:08.606790 systemd-networkd[1478]: calie652a8872d0: Gained carrier
Sep 12 00:18:08.618350 containerd[1588]: 2025-09-12 00:18:08.504 [INFO][4316] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--668d6bf9bc--qlmhd-eth0 coredns-668d6bf9bc- kube-system 8474d883-1171-4929-b6d0-2b8587f1cfa7 870 0 2025-09-12 00:17:31 +0000 UTC map[k8s-app:kube-dns pod-template-hash:668d6bf9bc projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-668d6bf9bc-qlmhd eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calie652a8872d0 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="8e18e9e972fd22652807e59049d6c6c94e411de6496fbeb4c339c89d5ca92e66" Namespace="kube-system" Pod="coredns-668d6bf9bc-qlmhd" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--qlmhd-"
Sep 12 00:18:08.618350 containerd[1588]: 2025-09-12 00:18:08.504 [INFO][4316] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="8e18e9e972fd22652807e59049d6c6c94e411de6496fbeb4c339c89d5ca92e66" Namespace="kube-system" Pod="coredns-668d6bf9bc-qlmhd" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--qlmhd-eth0"
Sep 12 00:18:08.618350 containerd[1588]: 2025-09-12 00:18:08.552 [INFO][4368] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="8e18e9e972fd22652807e59049d6c6c94e411de6496fbeb4c339c89d5ca92e66" HandleID="k8s-pod-network.8e18e9e972fd22652807e59049d6c6c94e411de6496fbeb4c339c89d5ca92e66" Workload="localhost-k8s-coredns--668d6bf9bc--qlmhd-eth0"
Sep 12 00:18:08.618350 containerd[1588]: 2025-09-12 00:18:08.552 [INFO][4368] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="8e18e9e972fd22652807e59049d6c6c94e411de6496fbeb4c339c89d5ca92e66" HandleID="k8s-pod-network.8e18e9e972fd22652807e59049d6c6c94e411de6496fbeb4c339c89d5ca92e66" Workload="localhost-k8s-coredns--668d6bf9bc--qlmhd-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000324140), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-668d6bf9bc-qlmhd", "timestamp":"2025-09-12 00:18:08.552309033 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"}
Sep 12 00:18:08.618350 containerd[1588]: 2025-09-12 00:18:08.552 [INFO][4368] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Sep 12 00:18:08.618350 containerd[1588]: 2025-09-12 00:18:08.552 [INFO][4368] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Sep 12 00:18:08.618350 containerd[1588]: 2025-09-12 00:18:08.552 [INFO][4368] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost'
Sep 12 00:18:08.618350 containerd[1588]: 2025-09-12 00:18:08.560 [INFO][4368] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.8e18e9e972fd22652807e59049d6c6c94e411de6496fbeb4c339c89d5ca92e66" host="localhost"
Sep 12 00:18:08.618350 containerd[1588]: 2025-09-12 00:18:08.564 [INFO][4368] ipam/ipam.go 394: Looking up existing affinities for host host="localhost"
Sep 12 00:18:08.618350 containerd[1588]: 2025-09-12 00:18:08.569 [INFO][4368] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost"
Sep 12 00:18:08.618350 containerd[1588]: 2025-09-12 00:18:08.571 [INFO][4368] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost"
Sep 12 00:18:08.618350 containerd[1588]: 2025-09-12 00:18:08.575 [INFO][4368] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost"
Sep 12 00:18:08.618350 containerd[1588]: 2025-09-12 00:18:08.575 [INFO][4368] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.8e18e9e972fd22652807e59049d6c6c94e411de6496fbeb4c339c89d5ca92e66" host="localhost"
Sep 12 00:18:08.618350 containerd[1588]: 2025-09-12 00:18:08.577 [INFO][4368] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.8e18e9e972fd22652807e59049d6c6c94e411de6496fbeb4c339c89d5ca92e66
Sep 12 00:18:08.618350 containerd[1588]: 2025-09-12 00:18:08.583 [INFO][4368] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.8e18e9e972fd22652807e59049d6c6c94e411de6496fbeb4c339c89d5ca92e66" host="localhost"
Sep 12 00:18:08.618350 containerd[1588]: 2025-09-12 00:18:08.588 [INFO][4368] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.131/26] block=192.168.88.128/26 handle="k8s-pod-network.8e18e9e972fd22652807e59049d6c6c94e411de6496fbeb4c339c89d5ca92e66" host="localhost"
Sep 12 00:18:08.618350 containerd[1588]: 2025-09-12 00:18:08.588 [INFO][4368] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.131/26] handle="k8s-pod-network.8e18e9e972fd22652807e59049d6c6c94e411de6496fbeb4c339c89d5ca92e66" host="localhost"
Sep 12 00:18:08.618350 containerd[1588]: 2025-09-12 00:18:08.588 [INFO][4368] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Sep 12 00:18:08.618350 containerd[1588]: 2025-09-12 00:18:08.588 [INFO][4368] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.131/26] IPv6=[] ContainerID="8e18e9e972fd22652807e59049d6c6c94e411de6496fbeb4c339c89d5ca92e66" HandleID="k8s-pod-network.8e18e9e972fd22652807e59049d6c6c94e411de6496fbeb4c339c89d5ca92e66" Workload="localhost-k8s-coredns--668d6bf9bc--qlmhd-eth0"
Sep 12 00:18:08.619040 containerd[1588]: 2025-09-12 00:18:08.599 [INFO][4316] cni-plugin/k8s.go 418: Populated endpoint ContainerID="8e18e9e972fd22652807e59049d6c6c94e411de6496fbeb4c339c89d5ca92e66" Namespace="kube-system" Pod="coredns-668d6bf9bc-qlmhd" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--qlmhd-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--qlmhd-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"8474d883-1171-4929-b6d0-2b8587f1cfa7", ResourceVersion:"870", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 0, 17, 31, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-668d6bf9bc-qlmhd", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calie652a8872d0", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Sep 12 00:18:08.619040 containerd[1588]: 2025-09-12 00:18:08.602 [INFO][4316] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.131/32] ContainerID="8e18e9e972fd22652807e59049d6c6c94e411de6496fbeb4c339c89d5ca92e66" Namespace="kube-system" Pod="coredns-668d6bf9bc-qlmhd" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--qlmhd-eth0"
Sep 12 00:18:08.619040 containerd[1588]: 2025-09-12 00:18:08.602 [INFO][4316] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calie652a8872d0 ContainerID="8e18e9e972fd22652807e59049d6c6c94e411de6496fbeb4c339c89d5ca92e66" Namespace="kube-system" Pod="coredns-668d6bf9bc-qlmhd" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--qlmhd-eth0"
Sep 12 00:18:08.619040 containerd[1588]: 2025-09-12 00:18:08.605 [INFO][4316] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="8e18e9e972fd22652807e59049d6c6c94e411de6496fbeb4c339c89d5ca92e66" Namespace="kube-system" Pod="coredns-668d6bf9bc-qlmhd" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--qlmhd-eth0"
Sep 12 00:18:08.619040 containerd[1588]: 2025-09-12 00:18:08.605 [INFO][4316] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="8e18e9e972fd22652807e59049d6c6c94e411de6496fbeb4c339c89d5ca92e66" Namespace="kube-system" Pod="coredns-668d6bf9bc-qlmhd" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--qlmhd-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--qlmhd-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"8474d883-1171-4929-b6d0-2b8587f1cfa7", ResourceVersion:"870", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 0, 17, 31, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"8e18e9e972fd22652807e59049d6c6c94e411de6496fbeb4c339c89d5ca92e66", Pod:"coredns-668d6bf9bc-qlmhd", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calie652a8872d0", MAC:"66:39:37:68:4b:6c", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Sep 12 00:18:08.619040 containerd[1588]: 2025-09-12 00:18:08.615 [INFO][4316] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="8e18e9e972fd22652807e59049d6c6c94e411de6496fbeb4c339c89d5ca92e66" Namespace="kube-system" Pod="coredns-668d6bf9bc-qlmhd" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--qlmhd-eth0"
Sep 12 00:18:08.647654 containerd[1588]: time="2025-09-12T00:18:08.647602647Z" level=info msg="connecting to shim 8e18e9e972fd22652807e59049d6c6c94e411de6496fbeb4c339c89d5ca92e66" address="unix:///run/containerd/s/0f4e793b6f3ac60335a6415f9ea1b53d3c29f889dc444eb23d7c0e16b1100dc9" namespace=k8s.io protocol=ttrpc version=3
Sep 12 00:18:08.682871 systemd[1]: Started cri-containerd-8e18e9e972fd22652807e59049d6c6c94e411de6496fbeb4c339c89d5ca92e66.scope - libcontainer container 8e18e9e972fd22652807e59049d6c6c94e411de6496fbeb4c339c89d5ca92e66.
Sep 12 00:18:08.700187 systemd-resolved[1406]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
Sep 12 00:18:08.706337 systemd-networkd[1478]: cali73b6153ac14: Link UP
Sep 12 00:18:08.707626 systemd-networkd[1478]: cali73b6153ac14: Gained carrier
Sep 12 00:18:08.731098 systemd-networkd[1478]: cali36ac8c00a42: Gained IPv6LL
Sep 12 00:18:08.738133 containerd[1588]: 2025-09-12 00:18:08.535 [INFO][4323] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--77d8f65979--w66d2-eth0 calico-apiserver-77d8f65979- calico-apiserver 71bfd4f7-26e5-4df5-8cca-b940b8c12eb9 864 0 2025-09-12 00:17:40 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:77d8f65979 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-77d8f65979-w66d2 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali73b6153ac14 [] [] }} ContainerID="87a504645343d75b8c5b318b374f601e8b2111cd0a4cba7c37bd8ec1dde99527" Namespace="calico-apiserver" Pod="calico-apiserver-77d8f65979-w66d2" WorkloadEndpoint="localhost-k8s-calico--apiserver--77d8f65979--w66d2-"
Sep 12 00:18:08.738133 containerd[1588]: 2025-09-12 00:18:08.535 [INFO][4323] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="87a504645343d75b8c5b318b374f601e8b2111cd0a4cba7c37bd8ec1dde99527" Namespace="calico-apiserver" Pod="calico-apiserver-77d8f65979-w66d2" WorkloadEndpoint="localhost-k8s-calico--apiserver--77d8f65979--w66d2-eth0"
Sep 12 00:18:08.738133 containerd[1588]: 2025-09-12 00:18:08.593 [INFO][4376] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="87a504645343d75b8c5b318b374f601e8b2111cd0a4cba7c37bd8ec1dde99527" HandleID="k8s-pod-network.87a504645343d75b8c5b318b374f601e8b2111cd0a4cba7c37bd8ec1dde99527" Workload="localhost-k8s-calico--apiserver--77d8f65979--w66d2-eth0"
Sep 12 00:18:08.738133 containerd[1588]: 2025-09-12 00:18:08.593 [INFO][4376] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="87a504645343d75b8c5b318b374f601e8b2111cd0a4cba7c37bd8ec1dde99527" HandleID="k8s-pod-network.87a504645343d75b8c5b318b374f601e8b2111cd0a4cba7c37bd8ec1dde99527" Workload="localhost-k8s-calico--apiserver--77d8f65979--w66d2-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00035f240), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-77d8f65979-w66d2", "timestamp":"2025-09-12 00:18:08.593318926 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"}
Sep 12 00:18:08.738133 containerd[1588]: 2025-09-12 00:18:08.593 [INFO][4376] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Sep 12 00:18:08.738133 containerd[1588]: 2025-09-12 00:18:08.593 [INFO][4376] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Sep 12 00:18:08.738133 containerd[1588]: 2025-09-12 00:18:08.593 [INFO][4376] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost'
Sep 12 00:18:08.738133 containerd[1588]: 2025-09-12 00:18:08.661 [INFO][4376] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.87a504645343d75b8c5b318b374f601e8b2111cd0a4cba7c37bd8ec1dde99527" host="localhost"
Sep 12 00:18:08.738133 containerd[1588]: 2025-09-12 00:18:08.667 [INFO][4376] ipam/ipam.go 394: Looking up existing affinities for host host="localhost"
Sep 12 00:18:08.738133 containerd[1588]: 2025-09-12 00:18:08.673 [INFO][4376] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost"
Sep 12 00:18:08.738133 containerd[1588]: 2025-09-12 00:18:08.675 [INFO][4376] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost"
Sep 12 00:18:08.738133 containerd[1588]: 2025-09-12 00:18:08.678 [INFO][4376] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost"
Sep 12 00:18:08.738133 containerd[1588]: 2025-09-12 00:18:08.678 [INFO][4376] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.87a504645343d75b8c5b318b374f601e8b2111cd0a4cba7c37bd8ec1dde99527" host="localhost"
Sep 12 00:18:08.738133 containerd[1588]: 2025-09-12 00:18:08.679 [INFO][4376] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.87a504645343d75b8c5b318b374f601e8b2111cd0a4cba7c37bd8ec1dde99527
Sep 12 00:18:08.738133 containerd[1588]: 2025-09-12 00:18:08.683 [INFO][4376] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.87a504645343d75b8c5b318b374f601e8b2111cd0a4cba7c37bd8ec1dde99527" host="localhost"
Sep 12 00:18:08.738133 containerd[1588]: 2025-09-12 00:18:08.689 [INFO][4376] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.132/26] block=192.168.88.128/26 handle="k8s-pod-network.87a504645343d75b8c5b318b374f601e8b2111cd0a4cba7c37bd8ec1dde99527" host="localhost"
Sep 12 00:18:08.738133 containerd[1588]: 2025-09-12 00:18:08.689 [INFO][4376] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.132/26] handle="k8s-pod-network.87a504645343d75b8c5b318b374f601e8b2111cd0a4cba7c37bd8ec1dde99527" host="localhost"
Sep 12 00:18:08.738133 containerd[1588]: 2025-09-12 00:18:08.689 [INFO][4376] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Sep 12 00:18:08.738133 containerd[1588]: 2025-09-12 00:18:08.689 [INFO][4376] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.132/26] IPv6=[] ContainerID="87a504645343d75b8c5b318b374f601e8b2111cd0a4cba7c37bd8ec1dde99527" HandleID="k8s-pod-network.87a504645343d75b8c5b318b374f601e8b2111cd0a4cba7c37bd8ec1dde99527" Workload="localhost-k8s-calico--apiserver--77d8f65979--w66d2-eth0"
Sep 12 00:18:08.738879 containerd[1588]: 2025-09-12 00:18:08.693 [INFO][4323] cni-plugin/k8s.go 418: Populated endpoint ContainerID="87a504645343d75b8c5b318b374f601e8b2111cd0a4cba7c37bd8ec1dde99527" Namespace="calico-apiserver" Pod="calico-apiserver-77d8f65979-w66d2" WorkloadEndpoint="localhost-k8s-calico--apiserver--77d8f65979--w66d2-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--77d8f65979--w66d2-eth0", GenerateName:"calico-apiserver-77d8f65979-", Namespace:"calico-apiserver", SelfLink:"", UID:"71bfd4f7-26e5-4df5-8cca-b940b8c12eb9", ResourceVersion:"864", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 0, 17, 40, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"77d8f65979", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-77d8f65979-w66d2", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali73b6153ac14", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Sep 12 00:18:08.738879 containerd[1588]: 2025-09-12 00:18:08.694 [INFO][4323] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.132/32] ContainerID="87a504645343d75b8c5b318b374f601e8b2111cd0a4cba7c37bd8ec1dde99527" Namespace="calico-apiserver" Pod="calico-apiserver-77d8f65979-w66d2" WorkloadEndpoint="localhost-k8s-calico--apiserver--77d8f65979--w66d2-eth0"
Sep 12 00:18:08.738879 containerd[1588]: 2025-09-12 00:18:08.694 [INFO][4323] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali73b6153ac14 ContainerID="87a504645343d75b8c5b318b374f601e8b2111cd0a4cba7c37bd8ec1dde99527" Namespace="calico-apiserver" Pod="calico-apiserver-77d8f65979-w66d2" WorkloadEndpoint="localhost-k8s-calico--apiserver--77d8f65979--w66d2-eth0"
Sep 12 00:18:08.738879 containerd[1588]: 2025-09-12 00:18:08.708 [INFO][4323] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="87a504645343d75b8c5b318b374f601e8b2111cd0a4cba7c37bd8ec1dde99527" Namespace="calico-apiserver" Pod="calico-apiserver-77d8f65979-w66d2" WorkloadEndpoint="localhost-k8s-calico--apiserver--77d8f65979--w66d2-eth0"
Sep 12 00:18:08.738879 containerd[1588]: 2025-09-12 00:18:08.709 [INFO][4323] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="87a504645343d75b8c5b318b374f601e8b2111cd0a4cba7c37bd8ec1dde99527" Namespace="calico-apiserver" Pod="calico-apiserver-77d8f65979-w66d2" WorkloadEndpoint="localhost-k8s-calico--apiserver--77d8f65979--w66d2-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--77d8f65979--w66d2-eth0", GenerateName:"calico-apiserver-77d8f65979-", Namespace:"calico-apiserver", SelfLink:"", UID:"71bfd4f7-26e5-4df5-8cca-b940b8c12eb9", ResourceVersion:"864", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 0, 17, 40, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"77d8f65979", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"87a504645343d75b8c5b318b374f601e8b2111cd0a4cba7c37bd8ec1dde99527", Pod:"calico-apiserver-77d8f65979-w66d2", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali73b6153ac14", MAC:"72:13:bd:ae:33:67", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Sep 12 00:18:08.738879 containerd[1588]: 2025-09-12 00:18:08.732 [INFO][4323] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="87a504645343d75b8c5b318b374f601e8b2111cd0a4cba7c37bd8ec1dde99527" Namespace="calico-apiserver" Pod="calico-apiserver-77d8f65979-w66d2" WorkloadEndpoint="localhost-k8s-calico--apiserver--77d8f65979--w66d2-eth0"
Sep 12 00:18:08.743757 containerd[1588]: time="2025-09-12T00:18:08.743730155Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-qlmhd,Uid:8474d883-1171-4929-b6d0-2b8587f1cfa7,Namespace:kube-system,Attempt:0,} returns sandbox id \"8e18e9e972fd22652807e59049d6c6c94e411de6496fbeb4c339c89d5ca92e66\""
Sep 12 00:18:08.745067 kubelet[2728]: E0912 00:18:08.745048    2728 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 12 00:18:08.748584 containerd[1588]: time="2025-09-12T00:18:08.748547480Z" level=info msg="CreateContainer within sandbox \"8e18e9e972fd22652807e59049d6c6c94e411de6496fbeb4c339c89d5ca92e66\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Sep 12 00:18:08.777768 containerd[1588]: time="2025-09-12T00:18:08.777470783Z" level=info msg="Container baf03ea565bb0bbb66d3104da08897d2508a2b39f74ba565e229cf65073b6551: CDI devices from CRI Config.CDIDevices: []"
Sep 12 00:18:08.786553 containerd[1588]: time="2025-09-12T00:18:08.786505950Z" level=info msg="CreateContainer within sandbox \"8e18e9e972fd22652807e59049d6c6c94e411de6496fbeb4c339c89d5ca92e66\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"baf03ea565bb0bbb66d3104da08897d2508a2b39f74ba565e229cf65073b6551\""
Sep 12 00:18:08.794947 containerd[1588]: time="2025-09-12T00:18:08.794910705Z" level=info msg="StartContainer for \"baf03ea565bb0bbb66d3104da08897d2508a2b39f74ba565e229cf65073b6551\""
Sep 12 00:18:08.795980 containerd[1588]: time="2025-09-12T00:18:08.795941278Z" level=info msg="connecting to shim baf03ea565bb0bbb66d3104da08897d2508a2b39f74ba565e229cf65073b6551" address="unix:///run/containerd/s/0f4e793b6f3ac60335a6415f9ea1b53d3c29f889dc444eb23d7c0e16b1100dc9" protocol=ttrpc version=3
Sep 12 00:18:08.796804 containerd[1588]: time="2025-09-12T00:18:08.796283140Z" level=info msg="connecting to shim 87a504645343d75b8c5b318b374f601e8b2111cd0a4cba7c37bd8ec1dde99527" address="unix:///run/containerd/s/564a6a70a278c5fe9eff2f18c0838a2549cd77cce1f23bd14fcbf53fbabf0b2b" namespace=k8s.io protocol=ttrpc version=3
Sep 12 00:18:08.820860 systemd[1]: Started cri-containerd-baf03ea565bb0bbb66d3104da08897d2508a2b39f74ba565e229cf65073b6551.scope - libcontainer container baf03ea565bb0bbb66d3104da08897d2508a2b39f74ba565e229cf65073b6551.
Sep 12 00:18:08.834909 systemd[1]: Started cri-containerd-87a504645343d75b8c5b318b374f601e8b2111cd0a4cba7c37bd8ec1dde99527.scope - libcontainer container 87a504645343d75b8c5b318b374f601e8b2111cd0a4cba7c37bd8ec1dde99527.
Sep 12 00:18:08.856107 systemd-resolved[1406]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
Sep 12 00:18:08.869089 containerd[1588]: time="2025-09-12T00:18:08.868995537Z" level=info msg="StartContainer for \"baf03ea565bb0bbb66d3104da08897d2508a2b39f74ba565e229cf65073b6551\" returns successfully"
Sep 12 00:18:08.901931 containerd[1588]: time="2025-09-12T00:18:08.901894747Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-77d8f65979-w66d2,Uid:71bfd4f7-26e5-4df5-8cca-b940b8c12eb9,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"87a504645343d75b8c5b318b374f601e8b2111cd0a4cba7c37bd8ec1dde99527\""
Sep 12 00:18:09.408423 containerd[1588]: time="2025-09-12T00:18:09.408137084Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-77d8f65979-kwb72,Uid:d85adbdb-2840-4a43-9e8a-c70fcf484d58,Namespace:calico-apiserver,Attempt:0,}"
Sep 12 00:18:09.408669 kubelet[2728]: E0912 00:18:09.408455    2728 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 12 00:18:09.408669 kubelet[2728]: E0912 00:18:09.408531    2728 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 12 00:18:09.443172 kubelet[2728]: I0912 00:18:09.443102    2728 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-qlmhd" podStartSLOduration=38.443081018 podStartE2EDuration="38.443081018s" podCreationTimestamp="2025-09-12 00:17:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-12 00:18:09.426493275 +0000 UTC m=+41.130227244" watchObservedRunningTime="2025-09-12 00:18:09.443081018 +0000 UTC m=+41.146814977"
Sep 12 00:18:09.527015 systemd-networkd[1478]: cali8dd0f0b0ba6: Link UP
Sep 12 00:18:09.528588 systemd-networkd[1478]: cali8dd0f0b0ba6: Gained carrier
Sep 12 00:18:09.545776 containerd[1588]: 2025-09-12 00:18:09.460 [INFO][4567] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--77d8f65979--kwb72-eth0 calico-apiserver-77d8f65979- calico-apiserver d85adbdb-2840-4a43-9e8a-c70fcf484d58 868 0 2025-09-12 00:17:40 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:77d8f65979 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-77d8f65979-kwb72 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali8dd0f0b0ba6 [] [] }} ContainerID="637b95d56cfa6c670ce3bf0dc1e59663fbe7294ac278ce86ab17a68f3a19d887" Namespace="calico-apiserver" Pod="calico-apiserver-77d8f65979-kwb72" WorkloadEndpoint="localhost-k8s-calico--apiserver--77d8f65979--kwb72-"
Sep 12 00:18:09.545776 containerd[1588]: 2025-09-12 00:18:09.460 [INFO][4567] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="637b95d56cfa6c670ce3bf0dc1e59663fbe7294ac278ce86ab17a68f3a19d887" Namespace="calico-apiserver" Pod="calico-apiserver-77d8f65979-kwb72" WorkloadEndpoint="localhost-k8s-calico--apiserver--77d8f65979--kwb72-eth0"
Sep 12 00:18:09.545776 containerd[1588]: 2025-09-12 00:18:09.488 [INFO][4584] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="637b95d56cfa6c670ce3bf0dc1e59663fbe7294ac278ce86ab17a68f3a19d887" HandleID="k8s-pod-network.637b95d56cfa6c670ce3bf0dc1e59663fbe7294ac278ce86ab17a68f3a19d887" Workload="localhost-k8s-calico--apiserver--77d8f65979--kwb72-eth0"
Sep 12 00:18:09.545776 containerd[1588]: 2025-09-12 00:18:09.489 [INFO][4584] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="637b95d56cfa6c670ce3bf0dc1e59663fbe7294ac278ce86ab17a68f3a19d887" HandleID="k8s-pod-network.637b95d56cfa6c670ce3bf0dc1e59663fbe7294ac278ce86ab17a68f3a19d887" Workload="localhost-k8s-calico--apiserver--77d8f65979--kwb72-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0001355b0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-77d8f65979-kwb72", "timestamp":"2025-09-12 00:18:09.488976955 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"}
Sep 12 00:18:09.545776 containerd[1588]: 2025-09-12 00:18:09.489 [INFO][4584] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Sep 12 00:18:09.545776 containerd[1588]: 2025-09-12 00:18:09.489 [INFO][4584] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Sep 12 00:18:09.545776 containerd[1588]: 2025-09-12 00:18:09.489 [INFO][4584] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost'
Sep 12 00:18:09.545776 containerd[1588]: 2025-09-12 00:18:09.496 [INFO][4584] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.637b95d56cfa6c670ce3bf0dc1e59663fbe7294ac278ce86ab17a68f3a19d887" host="localhost"
Sep 12 00:18:09.545776 containerd[1588]: 2025-09-12 00:18:09.501 [INFO][4584] ipam/ipam.go 394: Looking up existing affinities for host host="localhost"
Sep 12 00:18:09.545776 containerd[1588]: 2025-09-12 00:18:09.505 [INFO][4584] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost"
Sep 12 00:18:09.545776 containerd[1588]: 2025-09-12 00:18:09.507 [INFO][4584] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost"
Sep 12 00:18:09.545776 containerd[1588]: 2025-09-12 00:18:09.509 [INFO][4584] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost"
Sep 12 00:18:09.545776 containerd[1588]: 2025-09-12 00:18:09.509 [INFO][4584] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.637b95d56cfa6c670ce3bf0dc1e59663fbe7294ac278ce86ab17a68f3a19d887" host="localhost"
Sep 12 00:18:09.545776 containerd[1588]: 2025-09-12 00:18:09.510 [INFO][4584] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.637b95d56cfa6c670ce3bf0dc1e59663fbe7294ac278ce86ab17a68f3a19d887
Sep 12 00:18:09.545776 containerd[1588]: 2025-09-12 00:18:09.514 [INFO][4584] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.637b95d56cfa6c670ce3bf0dc1e59663fbe7294ac278ce86ab17a68f3a19d887" host="localhost"
Sep 12 00:18:09.545776 containerd[1588]: 2025-09-12 00:18:09.521 [INFO][4584] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.133/26] block=192.168.88.128/26 handle="k8s-pod-network.637b95d56cfa6c670ce3bf0dc1e59663fbe7294ac278ce86ab17a68f3a19d887" host="localhost"
Sep 12 00:18:09.545776 containerd[1588]: 2025-09-12 00:18:09.521 [INFO][4584] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.133/26] handle="k8s-pod-network.637b95d56cfa6c670ce3bf0dc1e59663fbe7294ac278ce86ab17a68f3a19d887" host="localhost"
Sep 12 00:18:09.545776 containerd[1588]: 2025-09-12 00:18:09.521 [INFO][4584] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Sep 12 00:18:09.545776 containerd[1588]: 2025-09-12 00:18:09.521 [INFO][4584] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.133/26] IPv6=[] ContainerID="637b95d56cfa6c670ce3bf0dc1e59663fbe7294ac278ce86ab17a68f3a19d887" HandleID="k8s-pod-network.637b95d56cfa6c670ce3bf0dc1e59663fbe7294ac278ce86ab17a68f3a19d887" Workload="localhost-k8s-calico--apiserver--77d8f65979--kwb72-eth0"
Sep 12 00:18:09.546613 containerd[1588]: 2025-09-12 00:18:09.524 [INFO][4567] cni-plugin/k8s.go 418: Populated endpoint ContainerID="637b95d56cfa6c670ce3bf0dc1e59663fbe7294ac278ce86ab17a68f3a19d887" Namespace="calico-apiserver" Pod="calico-apiserver-77d8f65979-kwb72" WorkloadEndpoint="localhost-k8s-calico--apiserver--77d8f65979--kwb72-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--77d8f65979--kwb72-eth0", GenerateName:"calico-apiserver-77d8f65979-", Namespace:"calico-apiserver", SelfLink:"", UID:"d85adbdb-2840-4a43-9e8a-c70fcf484d58", ResourceVersion:"868", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 0, 17, 40, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"77d8f65979", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-77d8f65979-kwb72", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali8dd0f0b0ba6", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Sep 12 00:18:09.546613 containerd[1588]: 2025-09-12 00:18:09.524 [INFO][4567] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.133/32] ContainerID="637b95d56cfa6c670ce3bf0dc1e59663fbe7294ac278ce86ab17a68f3a19d887" Namespace="calico-apiserver" Pod="calico-apiserver-77d8f65979-kwb72" WorkloadEndpoint="localhost-k8s-calico--apiserver--77d8f65979--kwb72-eth0"
Sep 12 00:18:09.546613 containerd[1588]: 2025-09-12 00:18:09.524 [INFO][4567] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali8dd0f0b0ba6 ContainerID="637b95d56cfa6c670ce3bf0dc1e59663fbe7294ac278ce86ab17a68f3a19d887" Namespace="calico-apiserver" Pod="calico-apiserver-77d8f65979-kwb72" WorkloadEndpoint="localhost-k8s-calico--apiserver--77d8f65979--kwb72-eth0"
Sep 12 00:18:09.546613 containerd[1588]: 2025-09-12 00:18:09.528 [INFO][4567] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="637b95d56cfa6c670ce3bf0dc1e59663fbe7294ac278ce86ab17a68f3a19d887" Namespace="calico-apiserver" Pod="calico-apiserver-77d8f65979-kwb72" WorkloadEndpoint="localhost-k8s-calico--apiserver--77d8f65979--kwb72-eth0"
Sep 12 00:18:09.546613 containerd[1588]: 2025-09-12 00:18:09.528 [INFO][4567] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="637b95d56cfa6c670ce3bf0dc1e59663fbe7294ac278ce86ab17a68f3a19d887" Namespace="calico-apiserver" Pod="calico-apiserver-77d8f65979-kwb72" WorkloadEndpoint="localhost-k8s-calico--apiserver--77d8f65979--kwb72-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--77d8f65979--kwb72-eth0", GenerateName:"calico-apiserver-77d8f65979-", Namespace:"calico-apiserver", SelfLink:"", UID:"d85adbdb-2840-4a43-9e8a-c70fcf484d58", ResourceVersion:"868", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 0, 17, 40, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"77d8f65979", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"637b95d56cfa6c670ce3bf0dc1e59663fbe7294ac278ce86ab17a68f3a19d887", Pod:"calico-apiserver-77d8f65979-kwb72", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali8dd0f0b0ba6", MAC:"42:02:b1:59:de:43", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Sep 12 00:18:09.546613 containerd[1588]: 2025-09-12 00:18:09.538 [INFO][4567] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="637b95d56cfa6c670ce3bf0dc1e59663fbe7294ac278ce86ab17a68f3a19d887" Namespace="calico-apiserver" Pod="calico-apiserver-77d8f65979-kwb72" WorkloadEndpoint="localhost-k8s-calico--apiserver--77d8f65979--kwb72-eth0"
Sep 12 00:18:09.576700 containerd[1588]: time="2025-09-12T00:18:09.576543994Z" level=info msg="connecting to shim 637b95d56cfa6c670ce3bf0dc1e59663fbe7294ac278ce86ab17a68f3a19d887" address="unix:///run/containerd/s/226339c3952c6d61ef3a95b314a2bbc80a9dc3f597844d80794f8bd84ec6774a" namespace=k8s.io protocol=ttrpc version=3
Sep 12 00:18:09.605011 systemd[1]: Started cri-containerd-637b95d56cfa6c670ce3bf0dc1e59663fbe7294ac278ce86ab17a68f3a19d887.scope - libcontainer container 637b95d56cfa6c670ce3bf0dc1e59663fbe7294ac278ce86ab17a68f3a19d887.
Sep 12 00:18:09.619587 systemd-resolved[1406]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
Sep 12 00:18:09.626929 systemd-networkd[1478]: cali015f86c58be: Gained IPv6LL
Sep 12 00:18:09.628633 systemd-networkd[1478]: vxlan.calico: Gained IPv6LL
Sep 12 00:18:09.655333 containerd[1588]: time="2025-09-12T00:18:09.655271202Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-77d8f65979-kwb72,Uid:d85adbdb-2840-4a43-9e8a-c70fcf484d58,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"637b95d56cfa6c670ce3bf0dc1e59663fbe7294ac278ce86ab17a68f3a19d887\""
Sep 12 00:18:09.959403 containerd[1588]: time="2025-09-12T00:18:09.959340272Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 00:18:09.959982 containerd[1588]: time="2025-09-12T00:18:09.959946971Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.3: active requests=0, bytes read=4661291"
Sep 12 00:18:09.961054 containerd[1588]: time="2025-09-12T00:18:09.961028720Z" level=info msg="ImageCreate event name:\"sha256:9a4eedeed4a531acefb7f5d0a1b7e3856b1a9a24d9e7d25deef2134d7a734c2d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 00:18:09.963077 containerd[1588]: time="2025-09-12T00:18:09.963043509Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker@sha256:e7113761fc7633d515882f0d48b5c8d0b8e62f3f9d34823f2ee194bb16d2ec44\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 00:18:09.963632 containerd[1588]: time="2025-09-12T00:18:09.963600373Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker:v3.30.3\" with image id \"sha256:9a4eedeed4a531acefb7f5d0a1b7e3856b1a9a24d9e7d25deef2134d7a734c2d\", repo tag \"ghcr.io/flatcar/calico/whisker:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/whisker@sha256:e7113761fc7633d515882f0d48b5c8d0b8e62f3f9d34823f2ee194bb16d2ec44\", size \"6153986\" in 1.746591268s"
Sep 12 00:18:09.963632 containerd[1588]: time="2025-09-12T00:18:09.963629107Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.3\" returns image reference \"sha256:9a4eedeed4a531acefb7f5d0a1b7e3856b1a9a24d9e7d25deef2134d7a734c2d\""
Sep 12 00:18:09.964591 containerd[1588]: time="2025-09-12T00:18:09.964362032Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\""
Sep 12 00:18:09.965691 containerd[1588]: time="2025-09-12T00:18:09.965660707Z" level=info msg="CreateContainer within sandbox \"d68572e8179f6134a9a24125ff02534660f9b073aec049609dfec52829071425\" for container &ContainerMetadata{Name:whisker,Attempt:0,}"
Sep 12 00:18:09.973153 containerd[1588]: time="2025-09-12T00:18:09.973118356Z" level=info msg="Container 821a327e2584f9c71bdd1d6c10c5f94a7c1c194c3df8e156adda176c30a6bbbe: CDI devices from CRI Config.CDIDevices: []"
Sep 12 00:18:09.981076 containerd[1588]: time="2025-09-12T00:18:09.981045886Z" level=info msg="CreateContainer within sandbox \"d68572e8179f6134a9a24125ff02534660f9b073aec049609dfec52829071425\" for &ContainerMetadata{Name:whisker,Attempt:0,} returns container id \"821a327e2584f9c71bdd1d6c10c5f94a7c1c194c3df8e156adda176c30a6bbbe\""
Sep 12 00:18:09.981620 containerd[1588]: time="2025-09-12T00:18:09.981552927Z" level=info msg="StartContainer for \"821a327e2584f9c71bdd1d6c10c5f94a7c1c194c3df8e156adda176c30a6bbbe\""
Sep 12 00:18:09.982632 containerd[1588]: time="2025-09-12T00:18:09.982606773Z" level=info msg="connecting to shim 821a327e2584f9c71bdd1d6c10c5f94a7c1c194c3df8e156adda176c30a6bbbe" address="unix:///run/containerd/s/e6ca6d7af1ccf119f2dcc8f5b49156bb1c05cbe8fcf072a2d55acc0a30b86d67" protocol=ttrpc version=3
Sep 12 00:18:10.010876 systemd[1]: Started cri-containerd-821a327e2584f9c71bdd1d6c10c5f94a7c1c194c3df8e156adda176c30a6bbbe.scope - libcontainer container 821a327e2584f9c71bdd1d6c10c5f94a7c1c194c3df8e156adda176c30a6bbbe.
Sep 12 00:18:10.213069 containerd[1588]: time="2025-09-12T00:18:10.212939974Z" level=info msg="StartContainer for \"821a327e2584f9c71bdd1d6c10c5f94a7c1c194c3df8e156adda176c30a6bbbe\" returns successfully"
Sep 12 00:18:10.266994 systemd-networkd[1478]: calie652a8872d0: Gained IPv6LL
Sep 12 00:18:10.409485 containerd[1588]: time="2025-09-12T00:18:10.409146613Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5c944b767b-95z2p,Uid:9786ace3-0bf8-4cca-8b2e-49b4a075c0b6,Namespace:calico-system,Attempt:0,}"
Sep 12 00:18:10.409663 containerd[1588]: time="2025-09-12T00:18:10.409609501Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-54d579b49d-v4pxb,Uid:f6080d34-27a9-469d-9690-545a2c483565,Namespace:calico-system,Attempt:0,}"
Sep 12 00:18:10.409892 containerd[1588]: time="2025-09-12T00:18:10.409836397Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-n85pl,Uid:076fcf6e-1869-4af3-96de-c12eddd0a2fa,Namespace:calico-system,Attempt:0,}"
Sep 12 00:18:10.413824 kubelet[2728]: E0912 00:18:10.413794    2728 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 12 00:18:10.414009 kubelet[2728]: E0912 00:18:10.413958    2728 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 12 00:18:10.459904 systemd-networkd[1478]: cali73b6153ac14: Gained IPv6LL
Sep 12 00:18:10.533349 systemd-networkd[1478]: cali4b26a28b19b: Link UP
Sep 12 00:18:10.534291 systemd-networkd[1478]: cali4b26a28b19b: Gained carrier
Sep 12 00:18:10.546103 containerd[1588]: 2025-09-12 00:18:10.452 [INFO][4686] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--kube--controllers--5c944b767b--95z2p-eth0 calico-kube-controllers-5c944b767b- calico-system 9786ace3-0bf8-4cca-8b2e-49b4a075c0b6 869 0 2025-09-12 00:17:43 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:5c944b767b projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s localhost calico-kube-controllers-5c944b767b-95z2p eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali4b26a28b19b [] [] }} ContainerID="29dc9f2c0ea1d1e9aedad1a43428826856da988fd817b74f1290b7b2d4d1688a" Namespace="calico-system" Pod="calico-kube-controllers-5c944b767b-95z2p" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--5c944b767b--95z2p-"
Sep 12 00:18:10.546103 containerd[1588]: 2025-09-12 00:18:10.452 [INFO][4686] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="29dc9f2c0ea1d1e9aedad1a43428826856da988fd817b74f1290b7b2d4d1688a" Namespace="calico-system" Pod="calico-kube-controllers-5c944b767b-95z2p" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--5c944b767b--95z2p-eth0"
Sep 12 00:18:10.546103 containerd[1588]: 2025-09-12 00:18:10.497 [INFO][4729] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="29dc9f2c0ea1d1e9aedad1a43428826856da988fd817b74f1290b7b2d4d1688a" HandleID="k8s-pod-network.29dc9f2c0ea1d1e9aedad1a43428826856da988fd817b74f1290b7b2d4d1688a" Workload="localhost-k8s-calico--kube--controllers--5c944b767b--95z2p-eth0"
Sep 12 00:18:10.546103 containerd[1588]: 2025-09-12 00:18:10.498 [INFO][4729] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="29dc9f2c0ea1d1e9aedad1a43428826856da988fd817b74f1290b7b2d4d1688a" HandleID="k8s-pod-network.29dc9f2c0ea1d1e9aedad1a43428826856da988fd817b74f1290b7b2d4d1688a" Workload="localhost-k8s-calico--kube--controllers--5c944b767b--95z2p-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004f640), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-kube-controllers-5c944b767b-95z2p", "timestamp":"2025-09-12 00:18:10.497828401 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"}
Sep 12 00:18:10.546103 containerd[1588]: 2025-09-12 00:18:10.498 [INFO][4729] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Sep 12 00:18:10.546103 containerd[1588]: 2025-09-12 00:18:10.498 [INFO][4729] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Sep 12 00:18:10.546103 containerd[1588]: 2025-09-12 00:18:10.498 [INFO][4729] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost'
Sep 12 00:18:10.546103 containerd[1588]: 2025-09-12 00:18:10.504 [INFO][4729] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.29dc9f2c0ea1d1e9aedad1a43428826856da988fd817b74f1290b7b2d4d1688a" host="localhost"
Sep 12 00:18:10.546103 containerd[1588]: 2025-09-12 00:18:10.509 [INFO][4729] ipam/ipam.go 394: Looking up existing affinities for host host="localhost"
Sep 12 00:18:10.546103 containerd[1588]: 2025-09-12 00:18:10.513 [INFO][4729] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost"
Sep 12 00:18:10.546103 containerd[1588]: 2025-09-12 00:18:10.515 [INFO][4729] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost"
Sep 12 00:18:10.546103 containerd[1588]: 2025-09-12 00:18:10.517 [INFO][4729] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost"
Sep 12 00:18:10.546103 containerd[1588]: 2025-09-12 00:18:10.517 [INFO][4729] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.29dc9f2c0ea1d1e9aedad1a43428826856da988fd817b74f1290b7b2d4d1688a" host="localhost"
Sep 12 00:18:10.546103 containerd[1588]: 2025-09-12 00:18:10.518 [INFO][4729] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.29dc9f2c0ea1d1e9aedad1a43428826856da988fd817b74f1290b7b2d4d1688a
Sep 12 00:18:10.546103 containerd[1588]: 2025-09-12 00:18:10.522 [INFO][4729] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.29dc9f2c0ea1d1e9aedad1a43428826856da988fd817b74f1290b7b2d4d1688a" host="localhost"
Sep 12 00:18:10.546103 containerd[1588]: 2025-09-12 00:18:10.528 [INFO][4729] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.134/26] block=192.168.88.128/26 handle="k8s-pod-network.29dc9f2c0ea1d1e9aedad1a43428826856da988fd817b74f1290b7b2d4d1688a" host="localhost"
Sep 12 00:18:10.546103 containerd[1588]: 2025-09-12 00:18:10.528 [INFO][4729] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.134/26] handle="k8s-pod-network.29dc9f2c0ea1d1e9aedad1a43428826856da988fd817b74f1290b7b2d4d1688a" host="localhost"
Sep 12 00:18:10.546103 containerd[1588]: 2025-09-12 00:18:10.528 [INFO][4729] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Sep 12 00:18:10.546103 containerd[1588]: 2025-09-12 00:18:10.528 [INFO][4729] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.134/26] IPv6=[] ContainerID="29dc9f2c0ea1d1e9aedad1a43428826856da988fd817b74f1290b7b2d4d1688a" HandleID="k8s-pod-network.29dc9f2c0ea1d1e9aedad1a43428826856da988fd817b74f1290b7b2d4d1688a" Workload="localhost-k8s-calico--kube--controllers--5c944b767b--95z2p-eth0"
Sep 12 00:18:10.547157 containerd[1588]: 2025-09-12 00:18:10.531 [INFO][4686] cni-plugin/k8s.go 418: Populated endpoint ContainerID="29dc9f2c0ea1d1e9aedad1a43428826856da988fd817b74f1290b7b2d4d1688a" Namespace="calico-system" Pod="calico-kube-controllers-5c944b767b-95z2p" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--5c944b767b--95z2p-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--5c944b767b--95z2p-eth0", GenerateName:"calico-kube-controllers-5c944b767b-", Namespace:"calico-system", SelfLink:"", UID:"9786ace3-0bf8-4cca-8b2e-49b4a075c0b6", ResourceVersion:"869", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 0, 17, 43, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"5c944b767b", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-kube-controllers-5c944b767b-95z2p", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali4b26a28b19b", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Sep 12 00:18:10.547157 containerd[1588]: 2025-09-12 00:18:10.531 [INFO][4686] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.134/32] ContainerID="29dc9f2c0ea1d1e9aedad1a43428826856da988fd817b74f1290b7b2d4d1688a" Namespace="calico-system" Pod="calico-kube-controllers-5c944b767b-95z2p" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--5c944b767b--95z2p-eth0"
Sep 12 00:18:10.547157 containerd[1588]: 2025-09-12 00:18:10.531 [INFO][4686] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali4b26a28b19b ContainerID="29dc9f2c0ea1d1e9aedad1a43428826856da988fd817b74f1290b7b2d4d1688a" Namespace="calico-system" Pod="calico-kube-controllers-5c944b767b-95z2p" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--5c944b767b--95z2p-eth0"
Sep 12 00:18:10.547157 containerd[1588]: 2025-09-12 00:18:10.534 [INFO][4686]
cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="29dc9f2c0ea1d1e9aedad1a43428826856da988fd817b74f1290b7b2d4d1688a" Namespace="calico-system" Pod="calico-kube-controllers-5c944b767b-95z2p" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--5c944b767b--95z2p-eth0" Sep 12 00:18:10.547157 containerd[1588]: 2025-09-12 00:18:10.535 [INFO][4686] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="29dc9f2c0ea1d1e9aedad1a43428826856da988fd817b74f1290b7b2d4d1688a" Namespace="calico-system" Pod="calico-kube-controllers-5c944b767b-95z2p" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--5c944b767b--95z2p-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--5c944b767b--95z2p-eth0", GenerateName:"calico-kube-controllers-5c944b767b-", Namespace:"calico-system", SelfLink:"", UID:"9786ace3-0bf8-4cca-8b2e-49b4a075c0b6", ResourceVersion:"869", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 0, 17, 43, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"5c944b767b", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"29dc9f2c0ea1d1e9aedad1a43428826856da988fd817b74f1290b7b2d4d1688a", Pod:"calico-kube-controllers-5c944b767b-95z2p", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali4b26a28b19b", MAC:"fe:62:b6:a0:3e:86", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 00:18:10.547157 containerd[1588]: 2025-09-12 00:18:10.543 [INFO][4686] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="29dc9f2c0ea1d1e9aedad1a43428826856da988fd817b74f1290b7b2d4d1688a" Namespace="calico-system" Pod="calico-kube-controllers-5c944b767b-95z2p" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--5c944b767b--95z2p-eth0" Sep 12 00:18:10.589849 containerd[1588]: time="2025-09-12T00:18:10.589777778Z" level=info msg="connecting to shim 29dc9f2c0ea1d1e9aedad1a43428826856da988fd817b74f1290b7b2d4d1688a" address="unix:///run/containerd/s/06f90d138913b437ce1b9c5ef4012fd488626b73480b250a62feeec7054238da" namespace=k8s.io protocol=ttrpc version=3 Sep 12 00:18:10.617938 systemd[1]: Started cri-containerd-29dc9f2c0ea1d1e9aedad1a43428826856da988fd817b74f1290b7b2d4d1688a.scope - libcontainer container 29dc9f2c0ea1d1e9aedad1a43428826856da988fd817b74f1290b7b2d4d1688a. 
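The pair of lines above — "connecting to shim ... protocol=ttrpc version=3" followed by "Started cri-containerd-<id>.scope - libcontainer container <id>" — shows containerd dialing the per-sandbox shim over its ttrpc socket and the runc shim registering the container as a transient systemd scope unit, so the container's cgroup lives under systemd's management. A minimal sketch of observing those scopes over the systemd D-Bus API; it assumes the github.com/coreos/go-systemd/v22 library and root access to the system bus, and is illustrative rather than anything containerd itself runs:

```go
package main

import (
	"context"
	"fmt"
	"strings"

	"github.com/coreos/go-systemd/v22/dbus" // assumed dependency
)

func main() {
	// Connect to systemd's D-Bus API, the same interface the shim uses
	// to create the transient cri-containerd-*.scope units seen above.
	conn, err := dbus.NewSystemConnectionContext(context.Background())
	if err != nil {
		panic(err)
	}
	defer conn.Close()

	units, err := conn.ListUnitsContext(context.Background())
	if err != nil {
		panic(err)
	}
	// Every running CRI container shows up as one transient scope,
	// e.g. "cri-containerd-29dc9f2c0ea1....scope".
	for _, u := range units {
		if strings.HasPrefix(u.Name, "cri-containerd-") && strings.HasSuffix(u.Name, ".scope") {
			fmt.Printf("%s\t%s\n", u.Name, u.ActiveState)
		}
	}
}
```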
Sep 12 00:18:10.633496 systemd-resolved[1406]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 12 00:18:10.644749 systemd-networkd[1478]: calib6baae22303: Link UP Sep 12 00:18:10.645705 systemd-networkd[1478]: calib6baae22303: Gained carrier Sep 12 00:18:10.652029 systemd-networkd[1478]: cali8dd0f0b0ba6: Gained IPv6LL Sep 12 00:18:10.666376 containerd[1588]: 2025-09-12 00:18:10.464 [INFO][4709] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-csi--node--driver--n85pl-eth0 csi-node-driver- calico-system 076fcf6e-1869-4af3-96de-c12eddd0a2fa 748 0 2025-09-12 00:17:43 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:6c96d95cc7 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s localhost csi-node-driver-n85pl eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] calib6baae22303 [] [] }} ContainerID="1bc4476e30f374fd603bfde0595df2d67b05fa5233e910b087e60b1f386b9f6d" Namespace="calico-system" Pod="csi-node-driver-n85pl" WorkloadEndpoint="localhost-k8s-csi--node--driver--n85pl-" Sep 12 00:18:10.666376 containerd[1588]: 2025-09-12 00:18:10.464 [INFO][4709] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="1bc4476e30f374fd603bfde0595df2d67b05fa5233e910b087e60b1f386b9f6d" Namespace="calico-system" Pod="csi-node-driver-n85pl" WorkloadEndpoint="localhost-k8s-csi--node--driver--n85pl-eth0" Sep 12 00:18:10.666376 containerd[1588]: 2025-09-12 00:18:10.502 [INFO][4735] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="1bc4476e30f374fd603bfde0595df2d67b05fa5233e910b087e60b1f386b9f6d" HandleID="k8s-pod-network.1bc4476e30f374fd603bfde0595df2d67b05fa5233e910b087e60b1f386b9f6d" Workload="localhost-k8s-csi--node--driver--n85pl-eth0" Sep 12 00:18:10.666376 containerd[1588]: 2025-09-12 00:18:10.502 [INFO][4735] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="1bc4476e30f374fd603bfde0595df2d67b05fa5233e910b087e60b1f386b9f6d" HandleID="k8s-pod-network.1bc4476e30f374fd603bfde0595df2d67b05fa5233e910b087e60b1f386b9f6d" Workload="localhost-k8s-csi--node--driver--n85pl-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00024f2a0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"csi-node-driver-n85pl", "timestamp":"2025-09-12 00:18:10.50227976 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 12 00:18:10.666376 containerd[1588]: 2025-09-12 00:18:10.502 [INFO][4735] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 00:18:10.666376 containerd[1588]: 2025-09-12 00:18:10.528 [INFO][4735] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 12 00:18:10.666376 containerd[1588]: 2025-09-12 00:18:10.528 [INFO][4735] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 12 00:18:10.666376 containerd[1588]: 2025-09-12 00:18:10.605 [INFO][4735] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.1bc4476e30f374fd603bfde0595df2d67b05fa5233e910b087e60b1f386b9f6d" host="localhost" Sep 12 00:18:10.666376 containerd[1588]: 2025-09-12 00:18:10.611 [INFO][4735] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 12 00:18:10.666376 containerd[1588]: 2025-09-12 00:18:10.617 [INFO][4735] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 12 00:18:10.666376 containerd[1588]: 2025-09-12 00:18:10.619 [INFO][4735] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 12 00:18:10.666376 containerd[1588]: 2025-09-12 00:18:10.621 [INFO][4735] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 12 00:18:10.666376 containerd[1588]: 2025-09-12 00:18:10.621 [INFO][4735] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.1bc4476e30f374fd603bfde0595df2d67b05fa5233e910b087e60b1f386b9f6d" host="localhost" Sep 12 00:18:10.666376 containerd[1588]: 2025-09-12 00:18:10.622 [INFO][4735] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.1bc4476e30f374fd603bfde0595df2d67b05fa5233e910b087e60b1f386b9f6d Sep 12 00:18:10.666376 containerd[1588]: 2025-09-12 00:18:10.626 [INFO][4735] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.1bc4476e30f374fd603bfde0595df2d67b05fa5233e910b087e60b1f386b9f6d" host="localhost" Sep 12 00:18:10.666376 containerd[1588]: 2025-09-12 00:18:10.632 [INFO][4735] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.135/26] block=192.168.88.128/26 handle="k8s-pod-network.1bc4476e30f374fd603bfde0595df2d67b05fa5233e910b087e60b1f386b9f6d" host="localhost" Sep 12 00:18:10.666376 containerd[1588]: 2025-09-12 00:18:10.632 [INFO][4735] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.135/26] handle="k8s-pod-network.1bc4476e30f374fd603bfde0595df2d67b05fa5233e910b087e60b1f386b9f6d" host="localhost" Sep 12 00:18:10.666376 containerd[1588]: 2025-09-12 00:18:10.632 [INFO][4735] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
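The IPAM trace above follows Calico's standard flow: confirm this host's affinity for the block 192.168.88.128/26, then claim the lowest free ordinal — which is why successive requests in this section get .134, .135, and .136. A toy model of that sequential claim, treating the /26 block as an in-memory allocation map; this is a simplified sketch, not Calico's datastore-backed implementation:

```go
package main

import (
	"errors"
	"fmt"
	"net/netip"
)

// block models an affine IPAM block such as 192.168.88.128/26 (64 addresses).
type block struct {
	base netip.Addr
	size int
	used map[int]bool // ordinal -> allocated
}

// claimNext returns the lowest free address, mimicking the sequential
// .134 -> .135 -> .136 assignments seen in the log above.
func (b *block) claimNext() (netip.Addr, error) {
	for ord := 0; ord < b.size; ord++ {
		if !b.used[ord] {
			b.used[ord] = true
			addr := b.base
			for i := 0; i < ord; i++ {
				addr = addr.Next()
			}
			return addr, nil
		}
	}
	return netip.Addr{}, errors.New("block exhausted")
}

func main() {
	b := &block{base: netip.MustParseAddr("192.168.88.128"), size: 64, used: map[int]bool{}}
	for i := 0; i < 6; i++ { // .128-.133 were taken by earlier pods on this node
		b.used[i] = true
	}
	for i := 0; i < 3; i++ {
		ip, _ := b.claimNext()
		fmt.Println(ip) // 192.168.88.134, .135, .136
	}
}
```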
Sep 12 00:18:10.666376 containerd[1588]: 2025-09-12 00:18:10.632 [INFO][4735] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.135/26] IPv6=[] ContainerID="1bc4476e30f374fd603bfde0595df2d67b05fa5233e910b087e60b1f386b9f6d" HandleID="k8s-pod-network.1bc4476e30f374fd603bfde0595df2d67b05fa5233e910b087e60b1f386b9f6d" Workload="localhost-k8s-csi--node--driver--n85pl-eth0" Sep 12 00:18:10.667315 containerd[1588]: 2025-09-12 00:18:10.639 [INFO][4709] cni-plugin/k8s.go 418: Populated endpoint ContainerID="1bc4476e30f374fd603bfde0595df2d67b05fa5233e910b087e60b1f386b9f6d" Namespace="calico-system" Pod="csi-node-driver-n85pl" WorkloadEndpoint="localhost-k8s-csi--node--driver--n85pl-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--n85pl-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"076fcf6e-1869-4af3-96de-c12eddd0a2fa", ResourceVersion:"748", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 0, 17, 43, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"6c96d95cc7", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"csi-node-driver-n85pl", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calib6baae22303", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 00:18:10.667315 containerd[1588]: 2025-09-12 00:18:10.640 [INFO][4709] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.135/32] ContainerID="1bc4476e30f374fd603bfde0595df2d67b05fa5233e910b087e60b1f386b9f6d" Namespace="calico-system" Pod="csi-node-driver-n85pl" WorkloadEndpoint="localhost-k8s-csi--node--driver--n85pl-eth0" Sep 12 00:18:10.667315 containerd[1588]: 2025-09-12 00:18:10.640 [INFO][4709] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calib6baae22303 ContainerID="1bc4476e30f374fd603bfde0595df2d67b05fa5233e910b087e60b1f386b9f6d" Namespace="calico-system" Pod="csi-node-driver-n85pl" WorkloadEndpoint="localhost-k8s-csi--node--driver--n85pl-eth0" Sep 12 00:18:10.667315 containerd[1588]: 2025-09-12 00:18:10.646 [INFO][4709] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="1bc4476e30f374fd603bfde0595df2d67b05fa5233e910b087e60b1f386b9f6d" Namespace="calico-system" Pod="csi-node-driver-n85pl" WorkloadEndpoint="localhost-k8s-csi--node--driver--n85pl-eth0" Sep 12 00:18:10.667315 containerd[1588]: 2025-09-12 00:18:10.648 [INFO][4709] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="1bc4476e30f374fd603bfde0595df2d67b05fa5233e910b087e60b1f386b9f6d" Namespace="calico-system" Pod="csi-node-driver-n85pl" 
WorkloadEndpoint="localhost-k8s-csi--node--driver--n85pl-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--n85pl-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"076fcf6e-1869-4af3-96de-c12eddd0a2fa", ResourceVersion:"748", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 0, 17, 43, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"6c96d95cc7", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"1bc4476e30f374fd603bfde0595df2d67b05fa5233e910b087e60b1f386b9f6d", Pod:"csi-node-driver-n85pl", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calib6baae22303", MAC:"e2:1a:14:7d:c8:c7", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 00:18:10.667315 containerd[1588]: 2025-09-12 00:18:10.658 [INFO][4709] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="1bc4476e30f374fd603bfde0595df2d67b05fa5233e910b087e60b1f386b9f6d" Namespace="calico-system" Pod="csi-node-driver-n85pl" WorkloadEndpoint="localhost-k8s-csi--node--driver--n85pl-eth0" Sep 12 00:18:10.679779 containerd[1588]: time="2025-09-12T00:18:10.679695955Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5c944b767b-95z2p,Uid:9786ace3-0bf8-4cca-8b2e-49b4a075c0b6,Namespace:calico-system,Attempt:0,} returns sandbox id \"29dc9f2c0ea1d1e9aedad1a43428826856da988fd817b74f1290b7b2d4d1688a\"" Sep 12 00:18:10.696158 containerd[1588]: time="2025-09-12T00:18:10.696117257Z" level=info msg="connecting to shim 1bc4476e30f374fd603bfde0595df2d67b05fa5233e910b087e60b1f386b9f6d" address="unix:///run/containerd/s/a68304ec979af0cfc35b21ace50b0b77bdfda7a8df2ef2ee884d448c75d111ee" namespace=k8s.io protocol=ttrpc version=3 Sep 12 00:18:10.724108 systemd[1]: Started cri-containerd-1bc4476e30f374fd603bfde0595df2d67b05fa5233e910b087e60b1f386b9f6d.scope - libcontainer container 1bc4476e30f374fd603bfde0595df2d67b05fa5233e910b087e60b1f386b9f6d. 
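Note how each endpoint above carries a host-side veth name (cali4b26a28b19b, calib6baae22303, ...) and, once "Added Mac, interface name, and active container ID to endpoint" fires, a MAC written back into the datastore. Linux caps interface names at 15 characters, so the "cali" prefix leaves 11 characters derived from the endpoint identity. The sketch below illustrates that shape by hashing an endpoint key and truncating; the SHA-1 choice and key format here are assumptions for illustration, not Calico's exact scheme:

```go
package main

import (
	"crypto/sha1"
	"encoding/hex"
	"fmt"
)

// vethName derives a stable, 15-character host-side interface name from a
// workload endpoint key: "cali" plus 11 hash characters.
// NOTE: illustrative only; Calico's real derivation may differ in detail.
func vethName(endpointKey string) string {
	sum := sha1.Sum([]byte(endpointKey))
	return "cali" + hex.EncodeToString(sum[:])[:11]
}

func main() {
	fmt.Println(vethName("localhost-k8s-csi--node--driver--n85pl-eth0"))
}
```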
Sep 12 00:18:10.738642 systemd-resolved[1406]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 12 00:18:10.747841 systemd-networkd[1478]: cali25d366cf949: Link UP Sep 12 00:18:10.749017 systemd-networkd[1478]: cali25d366cf949: Gained carrier Sep 12 00:18:10.757885 containerd[1588]: time="2025-09-12T00:18:10.757854755Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-n85pl,Uid:076fcf6e-1869-4af3-96de-c12eddd0a2fa,Namespace:calico-system,Attempt:0,} returns sandbox id \"1bc4476e30f374fd603bfde0595df2d67b05fa5233e910b087e60b1f386b9f6d\"" Sep 12 00:18:10.766932 containerd[1588]: 2025-09-12 00:18:10.476 [INFO][4698] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-goldmane--54d579b49d--v4pxb-eth0 goldmane-54d579b49d- calico-system f6080d34-27a9-469d-9690-545a2c483565 871 0 2025-09-12 00:17:42 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:54d579b49d projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s localhost goldmane-54d579b49d-v4pxb eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] cali25d366cf949 [] [] }} ContainerID="3fd79e451b132daf41e8787251d643f6865cf2f8b025100262b95b092be396f2" Namespace="calico-system" Pod="goldmane-54d579b49d-v4pxb" WorkloadEndpoint="localhost-k8s-goldmane--54d579b49d--v4pxb-" Sep 12 00:18:10.766932 containerd[1588]: 2025-09-12 00:18:10.477 [INFO][4698] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="3fd79e451b132daf41e8787251d643f6865cf2f8b025100262b95b092be396f2" Namespace="calico-system" Pod="goldmane-54d579b49d-v4pxb" WorkloadEndpoint="localhost-k8s-goldmane--54d579b49d--v4pxb-eth0" Sep 12 00:18:10.766932 containerd[1588]: 2025-09-12 00:18:10.516 [INFO][4743] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="3fd79e451b132daf41e8787251d643f6865cf2f8b025100262b95b092be396f2" HandleID="k8s-pod-network.3fd79e451b132daf41e8787251d643f6865cf2f8b025100262b95b092be396f2" Workload="localhost-k8s-goldmane--54d579b49d--v4pxb-eth0" Sep 12 00:18:10.766932 containerd[1588]: 2025-09-12 00:18:10.517 [INFO][4743] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="3fd79e451b132daf41e8787251d643f6865cf2f8b025100262b95b092be396f2" HandleID="k8s-pod-network.3fd79e451b132daf41e8787251d643f6865cf2f8b025100262b95b092be396f2" Workload="localhost-k8s-goldmane--54d579b49d--v4pxb-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002df5f0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"goldmane-54d579b49d-v4pxb", "timestamp":"2025-09-12 00:18:10.516400707 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 12 00:18:10.766932 containerd[1588]: 2025-09-12 00:18:10.517 [INFO][4743] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 00:18:10.766932 containerd[1588]: 2025-09-12 00:18:10.632 [INFO][4743] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
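The timestamps above make the serialization visible: goldmane's IPAM request logged "About to acquire host-wide IPAM lock" at 10:18:10.517 but "Acquired" only at 10.632 — the same instant the csi-node-driver request released it. Three CNI ADDs run concurrently, but address assignment funnels through one per-host lock. A minimal Go model of that pattern, under the simplifying assumption that the lock is an in-process mutex (Calico's is datastore-backed):

```go
package main

import (
	"fmt"
	"sync"
)

var (
	ipamMu sync.Mutex // models the host-wide IPAM lock in the log
	nextIP = 134      // .134 was the first free ordinal in 192.168.88.128/26 here
)

// assign serializes address allocation exactly as the three concurrent
// CNI ADD requests (kube-controllers, csi-node-driver, goldmane) were.
func assign(pod string, wg *sync.WaitGroup) {
	defer wg.Done()
	ipamMu.Lock() // "About to acquire host-wide IPAM lock."
	defer ipamMu.Unlock()
	fmt.Printf("%s -> 192.168.88.%d/26\n", pod, nextIP)
	nextIP++ // "Released host-wide IPAM lock." -> next holder sees updated state
}

func main() {
	var wg sync.WaitGroup
	pods := []string{
		"calico-kube-controllers-5c944b767b-95z2p", // got .134
		"csi-node-driver-n85pl",                    // got .135
		"goldmane-54d579b49d-v4pxb",                // got .136
	}
	for _, p := range pods {
		wg.Add(1)
		go assign(p, &wg)
	}
	// Unlike the log's fixed order, which goroutine wins each acquisition
	// here depends on scheduling; only mutual exclusion is guaranteed.
	wg.Wait()
}
```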
Sep 12 00:18:10.766932 containerd[1588]: 2025-09-12 00:18:10.632 [INFO][4743] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 12 00:18:10.766932 containerd[1588]: 2025-09-12 00:18:10.705 [INFO][4743] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.3fd79e451b132daf41e8787251d643f6865cf2f8b025100262b95b092be396f2" host="localhost" Sep 12 00:18:10.766932 containerd[1588]: 2025-09-12 00:18:10.714 [INFO][4743] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 12 00:18:10.766932 containerd[1588]: 2025-09-12 00:18:10.720 [INFO][4743] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 12 00:18:10.766932 containerd[1588]: 2025-09-12 00:18:10.722 [INFO][4743] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 12 00:18:10.766932 containerd[1588]: 2025-09-12 00:18:10.724 [INFO][4743] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 12 00:18:10.766932 containerd[1588]: 2025-09-12 00:18:10.724 [INFO][4743] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.3fd79e451b132daf41e8787251d643f6865cf2f8b025100262b95b092be396f2" host="localhost" Sep 12 00:18:10.766932 containerd[1588]: 2025-09-12 00:18:10.726 [INFO][4743] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.3fd79e451b132daf41e8787251d643f6865cf2f8b025100262b95b092be396f2 Sep 12 00:18:10.766932 containerd[1588]: 2025-09-12 00:18:10.730 [INFO][4743] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.3fd79e451b132daf41e8787251d643f6865cf2f8b025100262b95b092be396f2" host="localhost" Sep 12 00:18:10.766932 containerd[1588]: 2025-09-12 00:18:10.737 [INFO][4743] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.136/26] block=192.168.88.128/26 handle="k8s-pod-network.3fd79e451b132daf41e8787251d643f6865cf2f8b025100262b95b092be396f2" host="localhost" Sep 12 00:18:10.766932 containerd[1588]: 2025-09-12 00:18:10.737 [INFO][4743] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.136/26] handle="k8s-pod-network.3fd79e451b132daf41e8787251d643f6865cf2f8b025100262b95b092be396f2" host="localhost" Sep 12 00:18:10.766932 containerd[1588]: 2025-09-12 00:18:10.737 [INFO][4743] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Sep 12 00:18:10.766932 containerd[1588]: 2025-09-12 00:18:10.737 [INFO][4743] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.136/26] IPv6=[] ContainerID="3fd79e451b132daf41e8787251d643f6865cf2f8b025100262b95b092be396f2" HandleID="k8s-pod-network.3fd79e451b132daf41e8787251d643f6865cf2f8b025100262b95b092be396f2" Workload="localhost-k8s-goldmane--54d579b49d--v4pxb-eth0" Sep 12 00:18:10.767576 containerd[1588]: 2025-09-12 00:18:10.742 [INFO][4698] cni-plugin/k8s.go 418: Populated endpoint ContainerID="3fd79e451b132daf41e8787251d643f6865cf2f8b025100262b95b092be396f2" Namespace="calico-system" Pod="goldmane-54d579b49d-v4pxb" WorkloadEndpoint="localhost-k8s-goldmane--54d579b49d--v4pxb-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--54d579b49d--v4pxb-eth0", GenerateName:"goldmane-54d579b49d-", Namespace:"calico-system", SelfLink:"", UID:"f6080d34-27a9-469d-9690-545a2c483565", ResourceVersion:"871", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 0, 17, 42, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"54d579b49d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"goldmane-54d579b49d-v4pxb", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali25d366cf949", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 00:18:10.767576 containerd[1588]: 2025-09-12 00:18:10.742 [INFO][4698] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.136/32] ContainerID="3fd79e451b132daf41e8787251d643f6865cf2f8b025100262b95b092be396f2" Namespace="calico-system" Pod="goldmane-54d579b49d-v4pxb" WorkloadEndpoint="localhost-k8s-goldmane--54d579b49d--v4pxb-eth0" Sep 12 00:18:10.767576 containerd[1588]: 2025-09-12 00:18:10.742 [INFO][4698] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali25d366cf949 ContainerID="3fd79e451b132daf41e8787251d643f6865cf2f8b025100262b95b092be396f2" Namespace="calico-system" Pod="goldmane-54d579b49d-v4pxb" WorkloadEndpoint="localhost-k8s-goldmane--54d579b49d--v4pxb-eth0" Sep 12 00:18:10.767576 containerd[1588]: 2025-09-12 00:18:10.749 [INFO][4698] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="3fd79e451b132daf41e8787251d643f6865cf2f8b025100262b95b092be396f2" Namespace="calico-system" Pod="goldmane-54d579b49d-v4pxb" WorkloadEndpoint="localhost-k8s-goldmane--54d579b49d--v4pxb-eth0" Sep 12 00:18:10.767576 containerd[1588]: 2025-09-12 00:18:10.750 [INFO][4698] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="3fd79e451b132daf41e8787251d643f6865cf2f8b025100262b95b092be396f2" Namespace="calico-system" Pod="goldmane-54d579b49d-v4pxb" WorkloadEndpoint="localhost-k8s-goldmane--54d579b49d--v4pxb-eth0" 
endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--54d579b49d--v4pxb-eth0", GenerateName:"goldmane-54d579b49d-", Namespace:"calico-system", SelfLink:"", UID:"f6080d34-27a9-469d-9690-545a2c483565", ResourceVersion:"871", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 0, 17, 42, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"54d579b49d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"3fd79e451b132daf41e8787251d643f6865cf2f8b025100262b95b092be396f2", Pod:"goldmane-54d579b49d-v4pxb", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali25d366cf949", MAC:"52:d4:b0:5e:72:22", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 00:18:10.767576 containerd[1588]: 2025-09-12 00:18:10.763 [INFO][4698] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="3fd79e451b132daf41e8787251d643f6865cf2f8b025100262b95b092be396f2" Namespace="calico-system" Pod="goldmane-54d579b49d-v4pxb" WorkloadEndpoint="localhost-k8s-goldmane--54d579b49d--v4pxb-eth0" Sep 12 00:18:10.789222 containerd[1588]: time="2025-09-12T00:18:10.789107155Z" level=info msg="connecting to shim 3fd79e451b132daf41e8787251d643f6865cf2f8b025100262b95b092be396f2" address="unix:///run/containerd/s/de77eb948e6da9bebba60118bc58560ea647c7d6ab061620943578e9c38420f1" namespace=k8s.io protocol=ttrpc version=3 Sep 12 00:18:10.818082 systemd[1]: Started cri-containerd-3fd79e451b132daf41e8787251d643f6865cf2f8b025100262b95b092be396f2.scope - libcontainer container 3fd79e451b132daf41e8787251d643f6865cf2f8b025100262b95b092be396f2. Sep 12 00:18:10.834017 systemd-resolved[1406]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 12 00:18:10.869954 containerd[1588]: time="2025-09-12T00:18:10.869891651Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-54d579b49d-v4pxb,Uid:f6080d34-27a9-469d-9690-545a2c483565,Namespace:calico-system,Attempt:0,} returns sandbox id \"3fd79e451b132daf41e8787251d643f6865cf2f8b025100262b95b092be396f2\"" Sep 12 00:18:11.419449 kubelet[2728]: E0912 00:18:11.419400 2728 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 00:18:11.562943 systemd[1]: Started sshd@8-10.0.0.92:22-10.0.0.1:33088.service - OpenSSH per-connection server daemon (10.0.0.1:33088). 
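The unit name sshd@8-10.0.0.92:22-10.0.0.1:33088.service is systemd's per-connection instantiation pattern: an sshd.socket with Accept=yes accepts each TCP connection and spawns a dedicated sshd@.service instance named after the local and remote endpoints, which is how Flatcar wires sshd. A minimal sketch of a service consuming such a pre-accepted connection in Go; it assumes the github.com/coreos/go-systemd/v22 activation package and is a toy echo-style handler, not sshd:

```go
package main

import (
	"bufio"
	"fmt"
	"net"

	"github.com/coreos/go-systemd/v22/activation" // assumed dependency
)

// With Accept=yes, systemd passes the already-accepted connection to the
// per-connection service instance as fd 3 and encodes the endpoints in
// the instance name (e.g. sshd@8-10.0.0.92:22-10.0.0.1:33088.service).
func main() {
	files := activation.Files(true) // true: unset LISTEN_* so children don't inherit
	if len(files) != 1 {
		panic("expected exactly one socket-activated connection")
	}
	conn, err := net.FileConn(files[0])
	if err != nil {
		panic(err)
	}
	defer conn.Close()

	fmt.Fprintf(conn, "hello %s\n", conn.RemoteAddr())
	line, _ := bufio.NewReader(conn).ReadString('\n')
	fmt.Print(line)
}
```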
Sep 12 00:18:11.629468 sshd[4923]: Accepted publickey for core from 10.0.0.1 port 33088 ssh2: RSA SHA256:A0suq1pUL9tS3FfPj5cV4y8UVFHgmV6pScsqSKyxj00 Sep 12 00:18:11.631318 sshd-session[4923]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 00:18:11.636473 systemd-logind[1570]: New session 9 of user core. Sep 12 00:18:11.650943 systemd[1]: Started session-9.scope - Session 9 of User core. Sep 12 00:18:11.773329 sshd[4926]: Connection closed by 10.0.0.1 port 33088 Sep 12 00:18:11.773543 sshd-session[4923]: pam_unix(sshd:session): session closed for user core Sep 12 00:18:11.778519 systemd[1]: sshd@8-10.0.0.92:22-10.0.0.1:33088.service: Deactivated successfully. Sep 12 00:18:11.781132 systemd[1]: session-9.scope: Deactivated successfully. Sep 12 00:18:11.781979 systemd-logind[1570]: Session 9 logged out. Waiting for processes to exit. Sep 12 00:18:11.783826 systemd-logind[1570]: Removed session 9. Sep 12 00:18:11.930974 systemd-networkd[1478]: calib6baae22303: Gained IPv6LL Sep 12 00:18:12.123034 systemd-networkd[1478]: cali25d366cf949: Gained IPv6LL Sep 12 00:18:12.186929 systemd-networkd[1478]: cali4b26a28b19b: Gained IPv6LL Sep 12 00:18:12.996480 containerd[1588]: time="2025-09-12T00:18:12.996401898Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 00:18:12.998086 containerd[1588]: time="2025-09-12T00:18:12.998030724Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.3: active requests=0, bytes read=47333864" Sep 12 00:18:13.001763 containerd[1588]: time="2025-09-12T00:18:13.001737657Z" level=info msg="ImageCreate event name:\"sha256:879f2443aed0573271114108bfec35d3e76419f98282ef796c646d0986c5ba6a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 00:18:13.004522 containerd[1588]: time="2025-09-12T00:18:13.004448581Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:6a24147f11c1edce9d6ba79bdb0c2beadec53853fb43438a287291e67b41e51b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 00:18:13.005025 containerd[1588]: time="2025-09-12T00:18:13.004993193Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" with image id \"sha256:879f2443aed0573271114108bfec35d3e76419f98282ef796c646d0986c5ba6a\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:6a24147f11c1edce9d6ba79bdb0c2beadec53853fb43438a287291e67b41e51b\", size \"48826583\" in 3.040605022s" Sep 12 00:18:13.005025 containerd[1588]: time="2025-09-12T00:18:13.005022558Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" returns image reference \"sha256:879f2443aed0573271114108bfec35d3e76419f98282ef796c646d0986c5ba6a\"" Sep 12 00:18:13.006414 containerd[1588]: time="2025-09-12T00:18:13.006372049Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\"" Sep 12 00:18:13.007143 containerd[1588]: time="2025-09-12T00:18:13.007069096Z" level=info msg="CreateContainer within sandbox \"87a504645343d75b8c5b318b374f601e8b2111cd0a4cba7c37bd8ec1dde99527\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Sep 12 00:18:13.020207 containerd[1588]: time="2025-09-12T00:18:13.020153246Z" level=info msg="Container 5006dcd789bee8fd7c282ec2186e3e1708173f4b4f87a07aa6c8bca12dc00dc4: CDI devices from CRI Config.CDIDevices: []" Sep 12 00:18:13.029837 containerd[1588]: time="2025-09-12T00:18:13.029787878Z" 
level=info msg="CreateContainer within sandbox \"87a504645343d75b8c5b318b374f601e8b2111cd0a4cba7c37bd8ec1dde99527\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"5006dcd789bee8fd7c282ec2186e3e1708173f4b4f87a07aa6c8bca12dc00dc4\"" Sep 12 00:18:13.030344 containerd[1588]: time="2025-09-12T00:18:13.030314616Z" level=info msg="StartContainer for \"5006dcd789bee8fd7c282ec2186e3e1708173f4b4f87a07aa6c8bca12dc00dc4\"" Sep 12 00:18:13.031306 containerd[1588]: time="2025-09-12T00:18:13.031284484Z" level=info msg="connecting to shim 5006dcd789bee8fd7c282ec2186e3e1708173f4b4f87a07aa6c8bca12dc00dc4" address="unix:///run/containerd/s/564a6a70a278c5fe9eff2f18c0838a2549cd77cce1f23bd14fcbf53fbabf0b2b" protocol=ttrpc version=3 Sep 12 00:18:13.065876 systemd[1]: Started cri-containerd-5006dcd789bee8fd7c282ec2186e3e1708173f4b4f87a07aa6c8bca12dc00dc4.scope - libcontainer container 5006dcd789bee8fd7c282ec2186e3e1708173f4b4f87a07aa6c8bca12dc00dc4. Sep 12 00:18:13.113638 containerd[1588]: time="2025-09-12T00:18:13.113588816Z" level=info msg="StartContainer for \"5006dcd789bee8fd7c282ec2186e3e1708173f4b4f87a07aa6c8bca12dc00dc4\" returns successfully" Sep 12 00:18:13.420467 containerd[1588]: time="2025-09-12T00:18:13.420391671Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 00:18:13.421085 containerd[1588]: time="2025-09-12T00:18:13.421044355Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.3: active requests=0, bytes read=77" Sep 12 00:18:13.422904 containerd[1588]: time="2025-09-12T00:18:13.422879507Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" with image id \"sha256:879f2443aed0573271114108bfec35d3e76419f98282ef796c646d0986c5ba6a\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:6a24147f11c1edce9d6ba79bdb0c2beadec53853fb43438a287291e67b41e51b\", size \"48826583\" in 416.479466ms" Sep 12 00:18:13.422969 containerd[1588]: time="2025-09-12T00:18:13.422909753Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" returns image reference \"sha256:879f2443aed0573271114108bfec35d3e76419f98282ef796c646d0986c5ba6a\"" Sep 12 00:18:13.425784 containerd[1588]: time="2025-09-12T00:18:13.425735744Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\"" Sep 12 00:18:13.427851 containerd[1588]: time="2025-09-12T00:18:13.427818441Z" level=info msg="CreateContainer within sandbox \"637b95d56cfa6c670ce3bf0dc1e59663fbe7294ac278ce86ab17a68f3a19d887\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Sep 12 00:18:13.438741 containerd[1588]: time="2025-09-12T00:18:13.438440643Z" level=info msg="Container 16089c5fd6b20c45c0fd52514184184afe56e4496a37dd18833b93b303798db9: CDI devices from CRI Config.CDIDevices: []" Sep 12 00:18:13.441603 kubelet[2728]: I0912 00:18:13.441521 2728 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-77d8f65979-w66d2" podStartSLOduration=29.338888619 podStartE2EDuration="33.441496455s" podCreationTimestamp="2025-09-12 00:17:40 +0000 UTC" firstStartedPulling="2025-09-12 00:18:08.903132428 +0000 UTC m=+40.606866397" lastFinishedPulling="2025-09-12 00:18:13.005740264 +0000 UTC m=+44.709474233" observedRunningTime="2025-09-12 00:18:13.440323606 +0000 UTC m=+45.144057595" watchObservedRunningTime="2025-09-12 00:18:13.441496455 +0000 UTC 
m=+45.145230434" Sep 12 00:18:13.457844 containerd[1588]: time="2025-09-12T00:18:13.457796408Z" level=info msg="CreateContainer within sandbox \"637b95d56cfa6c670ce3bf0dc1e59663fbe7294ac278ce86ab17a68f3a19d887\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"16089c5fd6b20c45c0fd52514184184afe56e4496a37dd18833b93b303798db9\"" Sep 12 00:18:13.458732 containerd[1588]: time="2025-09-12T00:18:13.458651131Z" level=info msg="StartContainer for \"16089c5fd6b20c45c0fd52514184184afe56e4496a37dd18833b93b303798db9\"" Sep 12 00:18:13.460509 containerd[1588]: time="2025-09-12T00:18:13.460455906Z" level=info msg="connecting to shim 16089c5fd6b20c45c0fd52514184184afe56e4496a37dd18833b93b303798db9" address="unix:///run/containerd/s/226339c3952c6d61ef3a95b314a2bbc80a9dc3f597844d80794f8bd84ec6774a" protocol=ttrpc version=3 Sep 12 00:18:13.482915 systemd[1]: Started cri-containerd-16089c5fd6b20c45c0fd52514184184afe56e4496a37dd18833b93b303798db9.scope - libcontainer container 16089c5fd6b20c45c0fd52514184184afe56e4496a37dd18833b93b303798db9. Sep 12 00:18:13.544968 containerd[1588]: time="2025-09-12T00:18:13.544905902Z" level=info msg="StartContainer for \"16089c5fd6b20c45c0fd52514184184afe56e4496a37dd18833b93b303798db9\" returns successfully" Sep 12 00:18:14.429997 kubelet[2728]: I0912 00:18:14.429961 2728 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 12 00:18:14.443427 kubelet[2728]: I0912 00:18:14.443339 2728 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-77d8f65979-kwb72" podStartSLOduration=30.676236545 podStartE2EDuration="34.443310498s" podCreationTimestamp="2025-09-12 00:17:40 +0000 UTC" firstStartedPulling="2025-09-12 00:18:09.656563436 +0000 UTC m=+41.360297405" lastFinishedPulling="2025-09-12 00:18:13.423637389 +0000 UTC m=+45.127371358" observedRunningTime="2025-09-12 00:18:14.443100704 +0000 UTC m=+46.146834693" watchObservedRunningTime="2025-09-12 00:18:14.443310498 +0000 UTC m=+46.147044467" Sep 12 00:18:15.432085 kubelet[2728]: I0912 00:18:15.432026 2728 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 12 00:18:16.093365 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2780969731.mount: Deactivated successfully. 
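The kubelet pod_startup_latency_tracker lines above are internally consistent: podStartSLOduration is podStartE2EDuration minus the time spent pulling images. For calico-apiserver-77d8f65979-w66d2, pulling ran from m=+40.606866397 to m=+44.709474233 (4.103s), and 33.441s − 4.103s = 29.339s, exactly the reported SLO duration; the kwb72 line checks out the same way (34.443 − 3.767 = 30.676). A quick verification of that arithmetic:

```go
package main

import "fmt"

func main() {
	// Monotonic offsets (m=+...) copied from the kubelet log lines above
	// for pod calico-apiserver-77d8f65979-w66d2.
	firstStartedPulling := 40.606866397
	lastFinishedPulling := 44.709474233
	podStartE2E := 33.441496455 // seconds

	pull := lastFinishedPulling - firstStartedPulling
	slo := podStartE2E - pull
	// Prints pull=4.103s slo=29.339s, matching podStartSLOduration=29.338888619.
	fmt.Printf("pull=%.3fs slo=%.3fs\n", pull, slo)
}
```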
Sep 12 00:18:16.180086 containerd[1588]: time="2025-09-12T00:18:16.180012126Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 00:18:16.180862 containerd[1588]: time="2025-09-12T00:18:16.180822185Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.3: active requests=0, bytes read=33085545" Sep 12 00:18:16.182126 containerd[1588]: time="2025-09-12T00:18:16.182087007Z" level=info msg="ImageCreate event name:\"sha256:7e29b0984d517678aab6ca138482c318989f6f28daf9d3b5dd6e4a5a3115ac16\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 00:18:16.184298 containerd[1588]: time="2025-09-12T00:18:16.184269531Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend@sha256:29becebc47401da9997a2a30f4c25c511a5f379d17275680b048224829af71a5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 00:18:16.184864 containerd[1588]: time="2025-09-12T00:18:16.184836014Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\" with image id \"sha256:7e29b0984d517678aab6ca138482c318989f6f28daf9d3b5dd6e4a5a3115ac16\", repo tag \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/whisker-backend@sha256:29becebc47401da9997a2a30f4c25c511a5f379d17275680b048224829af71a5\", size \"33085375\" in 2.759064473s" Sep 12 00:18:16.184909 containerd[1588]: time="2025-09-12T00:18:16.184868384Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\" returns image reference \"sha256:7e29b0984d517678aab6ca138482c318989f6f28daf9d3b5dd6e4a5a3115ac16\"" Sep 12 00:18:16.185858 containerd[1588]: time="2025-09-12T00:18:16.185827423Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\"" Sep 12 00:18:16.187193 containerd[1588]: time="2025-09-12T00:18:16.187162437Z" level=info msg="CreateContainer within sandbox \"d68572e8179f6134a9a24125ff02534660f9b073aec049609dfec52829071425\" for container &ContainerMetadata{Name:whisker-backend,Attempt:0,}" Sep 12 00:18:16.195452 containerd[1588]: time="2025-09-12T00:18:16.195410296Z" level=info msg="Container 5f8e8ece077f83528c00890636c2d14810013a8d4b3f3cd2ae8e4929fe7ac3a1: CDI devices from CRI Config.CDIDevices: []" Sep 12 00:18:16.209287 containerd[1588]: time="2025-09-12T00:18:16.209218344Z" level=info msg="CreateContainer within sandbox \"d68572e8179f6134a9a24125ff02534660f9b073aec049609dfec52829071425\" for &ContainerMetadata{Name:whisker-backend,Attempt:0,} returns container id \"5f8e8ece077f83528c00890636c2d14810013a8d4b3f3cd2ae8e4929fe7ac3a1\"" Sep 12 00:18:16.209892 containerd[1588]: time="2025-09-12T00:18:16.209827978Z" level=info msg="StartContainer for \"5f8e8ece077f83528c00890636c2d14810013a8d4b3f3cd2ae8e4929fe7ac3a1\"" Sep 12 00:18:16.211097 containerd[1588]: time="2025-09-12T00:18:16.211055279Z" level=info msg="connecting to shim 5f8e8ece077f83528c00890636c2d14810013a8d4b3f3cd2ae8e4929fe7ac3a1" address="unix:///run/containerd/s/e6ca6d7af1ccf119f2dcc8f5b49156bb1c05cbe8fcf072a2d55acc0a30b86d67" protocol=ttrpc version=3 Sep 12 00:18:16.234870 systemd[1]: Started cri-containerd-5f8e8ece077f83528c00890636c2d14810013a8d4b3f3cd2ae8e4929fe7ac3a1.scope - libcontainer container 5f8e8ece077f83528c00890636c2d14810013a8d4b3f3cd2ae8e4929fe7ac3a1. 
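Also worth noting in the pull events above: the same apiserver image was "pulled" twice, first in 3.04s and then in 416ms with only 77 bytes read. Once the layer blobs are in containerd's content store, a repeat pull of the same reference amounts to resolving and verifying the manifest digest. A hedged client-side sketch of the same operation, assuming the github.com/containerd/containerd Go client and access to the node's containerd socket:

```go
package main

import (
	"context"
	"fmt"

	"github.com/containerd/containerd"
	"github.com/containerd/containerd/namespaces"
)

func main() {
	client, err := containerd.New("/run/containerd/containerd.sock")
	if err != nil {
		panic(err)
	}
	defer client.Close()

	// CRI-managed images and pods live in the "k8s.io" namespace
	// (see namespace=k8s.io in the shim lines above).
	ctx := namespaces.WithNamespace(context.Background(), "k8s.io")

	// A second Pull of an already-stored reference mostly verifies digests
	// against the local content store, which is why the log shows only
	// 77 bytes read and a ~416ms "pull" the second time around.
	img, err := client.Pull(ctx, "ghcr.io/flatcar/calico/apiserver:v3.30.3", containerd.WithPullUnpack)
	if err != nil {
		panic(err)
	}
	fmt.Println(img.Name(), img.Target().Digest)
}
```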
Sep 12 00:18:16.284655 containerd[1588]: time="2025-09-12T00:18:16.284594568Z" level=info msg="StartContainer for \"5f8e8ece077f83528c00890636c2d14810013a8d4b3f3cd2ae8e4929fe7ac3a1\" returns successfully" Sep 12 00:18:16.787578 systemd[1]: Started sshd@9-10.0.0.92:22-10.0.0.1:33090.service - OpenSSH per-connection server daemon (10.0.0.1:33090). Sep 12 00:18:16.841694 sshd[5077]: Accepted publickey for core from 10.0.0.1 port 33090 ssh2: RSA SHA256:A0suq1pUL9tS3FfPj5cV4y8UVFHgmV6pScsqSKyxj00 Sep 12 00:18:16.843971 sshd-session[5077]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 00:18:16.848816 systemd-logind[1570]: New session 10 of user core. Sep 12 00:18:16.861884 systemd[1]: Started session-10.scope - Session 10 of User core. Sep 12 00:18:16.996018 sshd[5079]: Connection closed by 10.0.0.1 port 33090 Sep 12 00:18:16.996339 sshd-session[5077]: pam_unix(sshd:session): session closed for user core Sep 12 00:18:17.001166 systemd[1]: sshd@9-10.0.0.92:22-10.0.0.1:33090.service: Deactivated successfully. Sep 12 00:18:17.003469 systemd[1]: session-10.scope: Deactivated successfully. Sep 12 00:18:17.005340 systemd-logind[1570]: Session 10 logged out. Waiting for processes to exit. Sep 12 00:18:17.006487 systemd-logind[1570]: Removed session 10. Sep 12 00:18:18.565692 kubelet[2728]: I0912 00:18:18.564814 2728 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 12 00:18:18.613128 kubelet[2728]: I0912 00:18:18.613014 2728 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/whisker-7bfb5fcc5f-lcllm" podStartSLOduration=3.642319138 podStartE2EDuration="11.612991453s" podCreationTimestamp="2025-09-12 00:18:07 +0000 UTC" firstStartedPulling="2025-09-12 00:18:08.21496436 +0000 UTC m=+39.918698329" lastFinishedPulling="2025-09-12 00:18:16.185636675 +0000 UTC m=+47.889370644" observedRunningTime="2025-09-12 00:18:16.448946071 +0000 UTC m=+48.152680040" watchObservedRunningTime="2025-09-12 00:18:18.612991453 +0000 UTC m=+50.316725422" Sep 12 00:18:20.531037 containerd[1588]: time="2025-09-12T00:18:20.530962000Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 00:18:20.532300 containerd[1588]: time="2025-09-12T00:18:20.532258040Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.3: active requests=0, bytes read=51277746" Sep 12 00:18:20.533882 containerd[1588]: time="2025-09-12T00:18:20.533840829Z" level=info msg="ImageCreate event name:\"sha256:df191a54fb79de3c693f8b1b864a1bd3bd14f63b3fff9d5fa4869c471ce3cd37\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 00:18:20.539047 containerd[1588]: time="2025-09-12T00:18:20.538987131Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:27c4187717f08f0a5727019d8beb7597665eb47e69eaa1d7d091a7e28913e577\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 00:18:20.539582 containerd[1588]: time="2025-09-12T00:18:20.539541821Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\" with image id \"sha256:df191a54fb79de3c693f8b1b864a1bd3bd14f63b3fff9d5fa4869c471ce3cd37\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:27c4187717f08f0a5727019d8beb7597665eb47e69eaa1d7d091a7e28913e577\", size \"52770417\" in 4.35368893s" Sep 12 00:18:20.539582 containerd[1588]: 
time="2025-09-12T00:18:20.539572959Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\" returns image reference \"sha256:df191a54fb79de3c693f8b1b864a1bd3bd14f63b3fff9d5fa4869c471ce3cd37\"" Sep 12 00:18:20.545200 containerd[1588]: time="2025-09-12T00:18:20.545168053Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.3\"" Sep 12 00:18:20.562178 containerd[1588]: time="2025-09-12T00:18:20.562131980Z" level=info msg="CreateContainer within sandbox \"29dc9f2c0ea1d1e9aedad1a43428826856da988fd817b74f1290b7b2d4d1688a\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Sep 12 00:18:20.571772 containerd[1588]: time="2025-09-12T00:18:20.571733808Z" level=info msg="Container f0c2205f8d13bd6fb6764e7a7f9f5a65d6979c6fcd78eb86f05a3ff30bb5fc6d: CDI devices from CRI Config.CDIDevices: []" Sep 12 00:18:20.581080 containerd[1588]: time="2025-09-12T00:18:20.581023531Z" level=info msg="CreateContainer within sandbox \"29dc9f2c0ea1d1e9aedad1a43428826856da988fd817b74f1290b7b2d4d1688a\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"f0c2205f8d13bd6fb6764e7a7f9f5a65d6979c6fcd78eb86f05a3ff30bb5fc6d\"" Sep 12 00:18:20.583047 containerd[1588]: time="2025-09-12T00:18:20.583016680Z" level=info msg="StartContainer for \"f0c2205f8d13bd6fb6764e7a7f9f5a65d6979c6fcd78eb86f05a3ff30bb5fc6d\"" Sep 12 00:18:20.583995 containerd[1588]: time="2025-09-12T00:18:20.583967764Z" level=info msg="connecting to shim f0c2205f8d13bd6fb6764e7a7f9f5a65d6979c6fcd78eb86f05a3ff30bb5fc6d" address="unix:///run/containerd/s/06f90d138913b437ce1b9c5ef4012fd488626b73480b250a62feeec7054238da" protocol=ttrpc version=3 Sep 12 00:18:20.607878 systemd[1]: Started cri-containerd-f0c2205f8d13bd6fb6764e7a7f9f5a65d6979c6fcd78eb86f05a3ff30bb5fc6d.scope - libcontainer container f0c2205f8d13bd6fb6764e7a7f9f5a65d6979c6fcd78eb86f05a3ff30bb5fc6d. Sep 12 00:18:20.666379 containerd[1588]: time="2025-09-12T00:18:20.666334043Z" level=info msg="StartContainer for \"f0c2205f8d13bd6fb6764e7a7f9f5a65d6979c6fcd78eb86f05a3ff30bb5fc6d\" returns successfully" Sep 12 00:18:21.582554 containerd[1588]: time="2025-09-12T00:18:21.582510967Z" level=info msg="TaskExit event in podsandbox handler container_id:\"f0c2205f8d13bd6fb6764e7a7f9f5a65d6979c6fcd78eb86f05a3ff30bb5fc6d\" id:\"133c223eaab550af7f55786dd13ffccd4979cf0d6eb0be1a9068205497550615\" pid:5163 exited_at:{seconds:1757636301 nanos:582249367}" Sep 12 00:18:21.609833 kubelet[2728]: I0912 00:18:21.609745 2728 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-5c944b767b-95z2p" podStartSLOduration=28.746542263 podStartE2EDuration="38.609728263s" podCreationTimestamp="2025-09-12 00:17:43 +0000 UTC" firstStartedPulling="2025-09-12 00:18:10.681559601 +0000 UTC m=+42.385293570" lastFinishedPulling="2025-09-12 00:18:20.544745611 +0000 UTC m=+52.248479570" observedRunningTime="2025-09-12 00:18:21.549368819 +0000 UTC m=+53.253102788" watchObservedRunningTime="2025-09-12 00:18:21.609728263 +0000 UTC m=+53.313462232" Sep 12 00:18:22.009555 systemd[1]: Started sshd@10-10.0.0.92:22-10.0.0.1:49728.service - OpenSSH per-connection server daemon (10.0.0.1:49728). 
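The TaskExit event above carries the kube-controllers container_id but a different exec id and pid (5163): a process executed inside the running container has exited without stopping the container itself — plausibly kubelet's readiness exec probe, which for calico-kube-controllers is conventionally /usr/bin/check-status -r (an assumption from Calico's standard manifest, not stated in this log). A sketch of issuing such a one-shot exec through the CRI API; it assumes the k8s.io/cri-api client, grpc-go, and containerd's CRI endpoint on its main socket:

```go
package main

import (
	"context"
	"fmt"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	// containerd serves CRI on its main socket; "unix://" is grpc-go's scheme.
	conn, err := grpc.Dial("unix:///run/containerd/containerd.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		panic(err)
	}
	defer conn.Close()

	rt := runtimeapi.NewRuntimeServiceClient(conn)
	// Each one-shot exec like this later surfaces as a TaskExit event in
	// the pod sandbox handler, as seen in the log above.
	resp, err := rt.ExecSync(context.Background(), &runtimeapi.ExecSyncRequest{
		ContainerId: "f0c2205f8d13bd6fb6764e7a7f9f5a65d6979c6fcd78eb86f05a3ff30bb5fc6d",
		Cmd:         []string{"/usr/bin/check-status", "-r"}, // assumed probe command
		Timeout:     5,
	})
	if err != nil {
		panic(err)
	}
	fmt.Printf("exit=%d stdout=%s", resp.ExitCode, resp.Stdout)
}
```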
Sep 12 00:18:22.096419 sshd[5176]: Accepted publickey for core from 10.0.0.1 port 49728 ssh2: RSA SHA256:A0suq1pUL9tS3FfPj5cV4y8UVFHgmV6pScsqSKyxj00 Sep 12 00:18:22.098360 sshd-session[5176]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 00:18:22.104164 systemd-logind[1570]: New session 11 of user core. Sep 12 00:18:22.113904 systemd[1]: Started session-11.scope - Session 11 of User core. Sep 12 00:18:22.165790 containerd[1588]: time="2025-09-12T00:18:22.165545103Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 00:18:22.166397 containerd[1588]: time="2025-09-12T00:18:22.166373236Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.3: active requests=0, bytes read=8760527" Sep 12 00:18:22.167768 containerd[1588]: time="2025-09-12T00:18:22.167690346Z" level=info msg="ImageCreate event name:\"sha256:666f4e02e75c30547109a06ed75b415a990a970811173aa741379cfaac4d9dd7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 00:18:22.171129 containerd[1588]: time="2025-09-12T00:18:22.170176709Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:f22c88018d8b58c4ef0052f594b216a13bd6852166ac131a538c5ab2fba23bb2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 00:18:22.173331 containerd[1588]: time="2025-09-12T00:18:22.173287895Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.30.3\" with image id \"sha256:666f4e02e75c30547109a06ed75b415a990a970811173aa741379cfaac4d9dd7\", repo tag \"ghcr.io/flatcar/calico/csi:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:f22c88018d8b58c4ef0052f594b216a13bd6852166ac131a538c5ab2fba23bb2\", size \"10253230\" in 1.628090406s" Sep 12 00:18:22.173451 containerd[1588]: time="2025-09-12T00:18:22.173426905Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.3\" returns image reference \"sha256:666f4e02e75c30547109a06ed75b415a990a970811173aa741379cfaac4d9dd7\"" Sep 12 00:18:22.176182 containerd[1588]: time="2025-09-12T00:18:22.176154581Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.3\"" Sep 12 00:18:22.177077 containerd[1588]: time="2025-09-12T00:18:22.177050631Z" level=info msg="CreateContainer within sandbox \"1bc4476e30f374fd603bfde0595df2d67b05fa5233e910b087e60b1f386b9f6d\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Sep 12 00:18:22.324977 containerd[1588]: time="2025-09-12T00:18:22.324869687Z" level=info msg="Container 4f8a1ce41dc8bc9bf05c1a771cae035ed06f1b4ced9ef06092638c7d092c11c6: CDI devices from CRI Config.CDIDevices: []" Sep 12 00:18:22.335822 containerd[1588]: time="2025-09-12T00:18:22.335781391Z" level=info msg="CreateContainer within sandbox \"1bc4476e30f374fd603bfde0595df2d67b05fa5233e910b087e60b1f386b9f6d\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"4f8a1ce41dc8bc9bf05c1a771cae035ed06f1b4ced9ef06092638c7d092c11c6\"" Sep 12 00:18:22.338433 containerd[1588]: time="2025-09-12T00:18:22.336438895Z" level=info msg="StartContainer for \"4f8a1ce41dc8bc9bf05c1a771cae035ed06f1b4ced9ef06092638c7d092c11c6\"" Sep 12 00:18:22.338433 containerd[1588]: time="2025-09-12T00:18:22.337827880Z" level=info msg="connecting to shim 4f8a1ce41dc8bc9bf05c1a771cae035ed06f1b4ced9ef06092638c7d092c11c6" address="unix:///run/containerd/s/a68304ec979af0cfc35b21ace50b0b77bdfda7a8df2ef2ee884d448c75d111ee" protocol=ttrpc version=3
Sep 12 00:18:22.338545 sshd[5183]: Connection closed by 10.0.0.1 port 49728 Sep 12 00:18:22.339173 sshd-session[5176]: pam_unix(sshd:session): session closed for user core Sep 12 00:18:22.353295 systemd[1]: sshd@10-10.0.0.92:22-10.0.0.1:49728.service: Deactivated successfully. Sep 12 00:18:22.356209 systemd[1]: session-11.scope: Deactivated successfully. Sep 12 00:18:22.358303 systemd-logind[1570]: Session 11 logged out. Waiting for processes to exit. Sep 12 00:18:22.380958 systemd[1]: Started cri-containerd-4f8a1ce41dc8bc9bf05c1a771cae035ed06f1b4ced9ef06092638c7d092c11c6.scope - libcontainer container 4f8a1ce41dc8bc9bf05c1a771cae035ed06f1b4ced9ef06092638c7d092c11c6. Sep 12 00:18:22.382678 systemd[1]: Started sshd@11-10.0.0.92:22-10.0.0.1:49742.service - OpenSSH per-connection server daemon (10.0.0.1:49742). Sep 12 00:18:22.384965 systemd-logind[1570]: Removed session 11. Sep 12 00:18:22.424564 containerd[1588]: time="2025-09-12T00:18:22.424520712Z" level=info msg="StartContainer for \"4f8a1ce41dc8bc9bf05c1a771cae035ed06f1b4ced9ef06092638c7d092c11c6\" returns successfully" Sep 12 00:18:22.432675 sshd[5210]: Accepted publickey for core from 10.0.0.1 port 49742 ssh2: RSA SHA256:A0suq1pUL9tS3FfPj5cV4y8UVFHgmV6pScsqSKyxj00 Sep 12 00:18:22.434335 sshd-session[5210]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 00:18:22.438660 systemd-logind[1570]: New session 12 of user core. Sep 12 00:18:22.448842 systemd[1]: Started session-12.scope - Session 12 of User core. Sep 12 00:18:22.599204 sshd[5231]: Connection closed by 10.0.0.1 port 49742 Sep 12 00:18:22.599470 sshd-session[5210]: pam_unix(sshd:session): session closed for user core Sep 12 00:18:22.613224 systemd[1]: sshd@11-10.0.0.92:22-10.0.0.1:49742.service: Deactivated successfully. Sep 12 00:18:22.617616 systemd[1]: session-12.scope: Deactivated successfully. Sep 12 00:18:22.618781 systemd-logind[1570]: Session 12 logged out. Waiting for processes to exit. Sep 12 00:18:22.624125 systemd[1]: Started sshd@12-10.0.0.92:22-10.0.0.1:49752.service - OpenSSH per-connection server daemon (10.0.0.1:49752). Sep 12 00:18:22.625042 systemd-logind[1570]: Removed session 12. Sep 12 00:18:22.674420 sshd[5242]: Accepted publickey for core from 10.0.0.1 port 49752 ssh2: RSA SHA256:A0suq1pUL9tS3FfPj5cV4y8UVFHgmV6pScsqSKyxj00 Sep 12 00:18:22.676040 sshd-session[5242]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 00:18:22.680361 systemd-logind[1570]: New session 13 of user core. Sep 12 00:18:22.690842 systemd[1]: Started session-13.scope - Session 13 of User core. Sep 12 00:18:22.807810 sshd[5244]: Connection closed by 10.0.0.1 port 49752 Sep 12 00:18:22.808133 sshd-session[5242]: pam_unix(sshd:session): session closed for user core Sep 12 00:18:22.812126 systemd[1]: sshd@12-10.0.0.92:22-10.0.0.1:49752.service: Deactivated successfully. Sep 12 00:18:22.814742 systemd[1]: session-13.scope: Deactivated successfully. Sep 12 00:18:22.815580 systemd-logind[1570]: Session 13 logged out. Waiting for processes to exit. Sep 12 00:18:22.817808 systemd-logind[1570]: Removed session 13. Sep 12 00:18:23.615502 kubelet[2728]: I0912 00:18:23.615434 2728 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 12 00:18:24.864145 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1060918254.mount: Deactivated successfully.
Sep 12 00:18:25.405846 containerd[1588]: time="2025-09-12T00:18:25.405758874Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 00:18:25.406539 containerd[1588]: time="2025-09-12T00:18:25.406501987Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.3: active requests=0, bytes read=66357526" Sep 12 00:18:25.407699 containerd[1588]: time="2025-09-12T00:18:25.407662154Z" level=info msg="ImageCreate event name:\"sha256:a7d029fd8f6be94c26af980675c1650818e1e6e19dbd2f8c13e6e61963f021e8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 00:18:25.409598 containerd[1588]: time="2025-09-12T00:18:25.409558230Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane@sha256:46297703ab3739331a00a58f0d6a5498c8d3b6523ad947eed68592ee0f3e79f0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 00:18:25.410411 containerd[1588]: time="2025-09-12T00:18:25.410391563Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/goldmane:v3.30.3\" with image id \"sha256:a7d029fd8f6be94c26af980675c1650818e1e6e19dbd2f8c13e6e61963f021e8\", repo tag \"ghcr.io/flatcar/calico/goldmane:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/goldmane@sha256:46297703ab3739331a00a58f0d6a5498c8d3b6523ad947eed68592ee0f3e79f0\", size \"66357372\" in 3.234206935s" Sep 12 00:18:25.410461 containerd[1588]: time="2025-09-12T00:18:25.410415818Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.3\" returns image reference \"sha256:a7d029fd8f6be94c26af980675c1650818e1e6e19dbd2f8c13e6e61963f021e8\"" Sep 12 00:18:25.411295 containerd[1588]: time="2025-09-12T00:18:25.411271924Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\"" Sep 12 00:18:25.412737 containerd[1588]: time="2025-09-12T00:18:25.412666780Z" level=info msg="CreateContainer within sandbox \"3fd79e451b132daf41e8787251d643f6865cf2f8b025100262b95b092be396f2\" for container &ContainerMetadata{Name:goldmane,Attempt:0,}" Sep 12 00:18:25.421517 containerd[1588]: time="2025-09-12T00:18:25.421470290Z" level=info msg="Container 8e77c2db2f97c9d515122f6effb58addd8500501585efee73de604a843c53768: CDI devices from CRI Config.CDIDevices: []" Sep 12 00:18:25.429727 containerd[1588]: time="2025-09-12T00:18:25.429683734Z" level=info msg="CreateContainer within sandbox \"3fd79e451b132daf41e8787251d643f6865cf2f8b025100262b95b092be396f2\" for &ContainerMetadata{Name:goldmane,Attempt:0,} returns container id \"8e77c2db2f97c9d515122f6effb58addd8500501585efee73de604a843c53768\"" Sep 12 00:18:25.430518 containerd[1588]: time="2025-09-12T00:18:25.430490247Z" level=info msg="StartContainer for \"8e77c2db2f97c9d515122f6effb58addd8500501585efee73de604a843c53768\"" Sep 12 00:18:25.431826 containerd[1588]: time="2025-09-12T00:18:25.431787540Z" level=info msg="connecting to shim 8e77c2db2f97c9d515122f6effb58addd8500501585efee73de604a843c53768" address="unix:///run/containerd/s/de77eb948e6da9bebba60118bc58560ea647c7d6ab061620943578e9c38420f1" protocol=ttrpc version=3 Sep 12 00:18:25.456930 systemd[1]: Started cri-containerd-8e77c2db2f97c9d515122f6effb58addd8500501585efee73de604a843c53768.scope - libcontainer container 8e77c2db2f97c9d515122f6effb58addd8500501585efee73de604a843c53768. 
Sep 12 00:18:25.508804 containerd[1588]: time="2025-09-12T00:18:25.508697614Z" level=info msg="StartContainer for \"8e77c2db2f97c9d515122f6effb58addd8500501585efee73de604a843c53768\" returns successfully" Sep 12 00:18:25.568519 kubelet[2728]: I0912 00:18:25.568355 2728 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/goldmane-54d579b49d-v4pxb" podStartSLOduration=29.02872035 podStartE2EDuration="43.568336865s" podCreationTimestamp="2025-09-12 00:17:42 +0000 UTC" firstStartedPulling="2025-09-12 00:18:10.87153902 +0000 UTC m=+42.575272999" lastFinishedPulling="2025-09-12 00:18:25.411155535 +0000 UTC m=+57.114889514" observedRunningTime="2025-09-12 00:18:25.568176565 +0000 UTC m=+57.271910524" watchObservedRunningTime="2025-09-12 00:18:25.568336865 +0000 UTC m=+57.272070835" Sep 12 00:18:25.626477 containerd[1588]: time="2025-09-12T00:18:25.626436039Z" level=info msg="TaskExit event in podsandbox handler container_id:\"8e77c2db2f97c9d515122f6effb58addd8500501585efee73de604a843c53768\" id:\"d8de8a9daa9f11699510f34f90a1b4e43bbc7d8e019b792b89c4bf421f42a24f\" pid:5317 exit_status:1 exited_at:{seconds:1757636305 nanos:626075112}" Sep 12 00:18:26.629998 containerd[1588]: time="2025-09-12T00:18:26.629954575Z" level=info msg="TaskExit event in podsandbox handler container_id:\"8e77c2db2f97c9d515122f6effb58addd8500501585efee73de604a843c53768\" id:\"c060a0c12732c5844ac3b5c20bac853b8ca65eca450d2c1af8ff96a5a095a261\" pid:5341 exit_status:1 exited_at:{seconds:1757636306 nanos:629629212}" Sep 12 00:18:27.626386 containerd[1588]: time="2025-09-12T00:18:27.626297745Z" level=info msg="TaskExit event in podsandbox handler container_id:\"8e77c2db2f97c9d515122f6effb58addd8500501585efee73de604a843c53768\" id:\"48677326a12a541a1d066168ebfe86fe945f0f2bb2582b5f317f198e654cbebe\" pid:5365 exit_status:1 exited_at:{seconds:1757636307 nanos:625922988}" Sep 12 00:18:27.821028 systemd[1]: Started sshd@13-10.0.0.92:22-10.0.0.1:49762.service - OpenSSH per-connection server daemon (10.0.0.1:49762). Sep 12 00:18:27.882341 sshd[5378]: Accepted publickey for core from 10.0.0.1 port 49762 ssh2: RSA SHA256:A0suq1pUL9tS3FfPj5cV4y8UVFHgmV6pScsqSKyxj00 Sep 12 00:18:27.913425 sshd-session[5378]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 00:18:27.917989 systemd-logind[1570]: New session 14 of user core. Sep 12 00:18:27.929872 systemd[1]: Started session-14.scope - Session 14 of User core. Sep 12 00:18:28.071973 sshd[5380]: Connection closed by 10.0.0.1 port 49762 Sep 12 00:18:28.072891 sshd-session[5378]: pam_unix(sshd:session): session closed for user core Sep 12 00:18:28.079043 systemd[1]: sshd@13-10.0.0.92:22-10.0.0.1:49762.service: Deactivated successfully. Sep 12 00:18:28.081866 systemd[1]: session-14.scope: Deactivated successfully. Sep 12 00:18:28.083387 systemd-logind[1570]: Session 14 logged out. Waiting for processes to exit. Sep 12 00:18:28.091521 systemd-logind[1570]: Removed session 14. 
Sep 12 00:18:28.274937 containerd[1588]: time="2025-09-12T00:18:28.274797532Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 00:18:28.276238 containerd[1588]: time="2025-09-12T00:18:28.276191345Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3: active requests=0, bytes read=14698542" Sep 12 00:18:28.277388 containerd[1588]: time="2025-09-12T00:18:28.277348640Z" level=info msg="ImageCreate event name:\"sha256:b8f31c4fdaed3fa08af64de3d37d65a4c2ea0d9f6f522cb60d2e0cb424f8dd8a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 00:18:28.279332 containerd[1588]: time="2025-09-12T00:18:28.279275438Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:731ab232ca708102ab332340b1274d5cd656aa896ecc5368ee95850b811df86f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 00:18:28.279872 containerd[1588]: time="2025-09-12T00:18:28.279825053Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\" with image id \"sha256:b8f31c4fdaed3fa08af64de3d37d65a4c2ea0d9f6f522cb60d2e0cb424f8dd8a\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:731ab232ca708102ab332340b1274d5cd656aa896ecc5368ee95850b811df86f\", size \"16191197\" in 2.868525127s" Sep 12 00:18:28.279872 containerd[1588]: time="2025-09-12T00:18:28.279866835Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\" returns image reference \"sha256:b8f31c4fdaed3fa08af64de3d37d65a4c2ea0d9f6f522cb60d2e0cb424f8dd8a\"" Sep 12 00:18:28.282264 containerd[1588]: time="2025-09-12T00:18:28.282231541Z" level=info msg="CreateContainer within sandbox \"1bc4476e30f374fd603bfde0595df2d67b05fa5233e910b087e60b1f386b9f6d\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Sep 12 00:18:28.290378 containerd[1588]: time="2025-09-12T00:18:28.290344197Z" level=info msg="Container d063dfbf3a7854b8ce5f169bb1a95faed4b5924288671522c32dde9c4a9ca54e: CDI devices from CRI Config.CDIDevices: []" Sep 12 00:18:28.300549 containerd[1588]: time="2025-09-12T00:18:28.300502971Z" level=info msg="CreateContainer within sandbox \"1bc4476e30f374fd603bfde0595df2d67b05fa5233e910b087e60b1f386b9f6d\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"d063dfbf3a7854b8ce5f169bb1a95faed4b5924288671522c32dde9c4a9ca54e\"" Sep 12 00:18:28.301224 containerd[1588]: time="2025-09-12T00:18:28.301173682Z" level=info msg="StartContainer for \"d063dfbf3a7854b8ce5f169bb1a95faed4b5924288671522c32dde9c4a9ca54e\"" Sep 12 00:18:28.303140 containerd[1588]: time="2025-09-12T00:18:28.303108766Z" level=info msg="connecting to shim d063dfbf3a7854b8ce5f169bb1a95faed4b5924288671522c32dde9c4a9ca54e" address="unix:///run/containerd/s/a68304ec979af0cfc35b21ace50b0b77bdfda7a8df2ef2ee884d448c75d111ee" protocol=ttrpc version=3 Sep 12 00:18:28.327879 systemd[1]: Started cri-containerd-d063dfbf3a7854b8ce5f169bb1a95faed4b5924288671522c32dde9c4a9ca54e.scope - libcontainer container d063dfbf3a7854b8ce5f169bb1a95faed4b5924288671522c32dde9c4a9ca54e. 
Sep 12 00:18:28.392853 containerd[1588]: time="2025-09-12T00:18:28.392765277Z" level=info msg="StartContainer for \"d063dfbf3a7854b8ce5f169bb1a95faed4b5924288671522c32dde9c4a9ca54e\" returns successfully" Sep 12 00:18:28.470755 kubelet[2728]: I0912 00:18:28.470419 2728 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Sep 12 00:18:28.470755 kubelet[2728]: I0912 00:18:28.470464 2728 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Sep 12 00:18:33.086451 systemd[1]: Started sshd@14-10.0.0.92:22-10.0.0.1:56734.service - OpenSSH per-connection server daemon (10.0.0.1:56734). Sep 12 00:18:33.162098 sshd[5444]: Accepted publickey for core from 10.0.0.1 port 56734 ssh2: RSA SHA256:A0suq1pUL9tS3FfPj5cV4y8UVFHgmV6pScsqSKyxj00 Sep 12 00:18:33.163846 sshd-session[5444]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 00:18:33.170212 systemd-logind[1570]: New session 15 of user core. Sep 12 00:18:33.172868 systemd[1]: Started session-15.scope - Session 15 of User core. Sep 12 00:18:33.336506 sshd[5447]: Connection closed by 10.0.0.1 port 56734 Sep 12 00:18:33.336883 sshd-session[5444]: pam_unix(sshd:session): session closed for user core Sep 12 00:18:33.341957 systemd[1]: sshd@14-10.0.0.92:22-10.0.0.1:56734.service: Deactivated successfully. Sep 12 00:18:33.344445 systemd[1]: session-15.scope: Deactivated successfully. Sep 12 00:18:33.345410 systemd-logind[1570]: Session 15 logged out. Waiting for processes to exit. Sep 12 00:18:33.346909 systemd-logind[1570]: Removed session 15. Sep 12 00:18:35.682689 containerd[1588]: time="2025-09-12T00:18:35.682636217Z" level=info msg="TaskExit event in podsandbox handler container_id:\"f0c2205f8d13bd6fb6764e7a7f9f5a65d6979c6fcd78eb86f05a3ff30bb5fc6d\" id:\"0d09aa7500a8e4fea29aad014e2746dbbae074f1da18ed33a1ac33742e9d24bf\" pid:5474 exited_at:{seconds:1757636315 nanos:682419559}" Sep 12 00:18:38.354067 systemd[1]: Started sshd@15-10.0.0.92:22-10.0.0.1:56748.service - OpenSSH per-connection server daemon (10.0.0.1:56748). Sep 12 00:18:38.412032 sshd[5485]: Accepted publickey for core from 10.0.0.1 port 56748 ssh2: RSA SHA256:A0suq1pUL9tS3FfPj5cV4y8UVFHgmV6pScsqSKyxj00 Sep 12 00:18:38.414168 sshd-session[5485]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 00:18:38.421014 systemd-logind[1570]: New session 16 of user core. Sep 12 00:18:38.423738 systemd[1]: Started session-16.scope - Session 16 of User core. 
Sep 12 00:18:38.501773 containerd[1588]: time="2025-09-12T00:18:38.501700899Z" level=info msg="TaskExit event in podsandbox handler container_id:\"35d56abd8de7058402ca58dd95fa1e3ff27070645c66ab120a440f99c43d3174\" id:\"09ad577793c2b0f650358fc93ad6bb5f37f47d8d9311d3b3459022507aeff329\" pid:5499 exited_at:{seconds:1757636318 nanos:501282765}" Sep 12 00:18:38.553130 kubelet[2728]: I0912 00:18:38.553046 2728 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-n85pl" podStartSLOduration=38.031654049 podStartE2EDuration="55.553020656s" podCreationTimestamp="2025-09-12 00:17:43 +0000 UTC" firstStartedPulling="2025-09-12 00:18:10.759282784 +0000 UTC m=+42.463016754" lastFinishedPulling="2025-09-12 00:18:28.280649392 +0000 UTC m=+59.984383361" observedRunningTime="2025-09-12 00:18:28.566520189 +0000 UTC m=+60.270254158" watchObservedRunningTime="2025-09-12 00:18:38.553020656 +0000 UTC m=+70.256754625" Sep 12 00:18:38.648483 sshd[5505]: Connection closed by 10.0.0.1 port 56748 Sep 12 00:18:38.648843 sshd-session[5485]: pam_unix(sshd:session): session closed for user core Sep 12 00:18:38.652953 systemd[1]: sshd@15-10.0.0.92:22-10.0.0.1:56748.service: Deactivated successfully. Sep 12 00:18:38.655216 systemd[1]: session-16.scope: Deactivated successfully. Sep 12 00:18:38.656696 systemd-logind[1570]: Session 16 logged out. Waiting for processes to exit. Sep 12 00:18:38.658521 systemd-logind[1570]: Removed session 16. Sep 12 00:18:43.408786 kubelet[2728]: E0912 00:18:43.408740 2728 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 00:18:43.666153 systemd[1]: Started sshd@16-10.0.0.92:22-10.0.0.1:58608.service - OpenSSH per-connection server daemon (10.0.0.1:58608). Sep 12 00:18:43.709997 sshd[5525]: Accepted publickey for core from 10.0.0.1 port 58608 ssh2: RSA SHA256:A0suq1pUL9tS3FfPj5cV4y8UVFHgmV6pScsqSKyxj00 Sep 12 00:18:43.711590 sshd-session[5525]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 00:18:43.715987 systemd-logind[1570]: New session 17 of user core. Sep 12 00:18:43.732036 systemd[1]: Started session-17.scope - Session 17 of User core. Sep 12 00:18:43.862337 sshd[5527]: Connection closed by 10.0.0.1 port 58608 Sep 12 00:18:43.862857 sshd-session[5525]: pam_unix(sshd:session): session closed for user core Sep 12 00:18:43.867570 systemd[1]: sshd@16-10.0.0.92:22-10.0.0.1:58608.service: Deactivated successfully. Sep 12 00:18:43.870240 systemd[1]: session-17.scope: Deactivated successfully. Sep 12 00:18:43.871223 systemd-logind[1570]: Session 17 logged out. Waiting for processes to exit. Sep 12 00:18:43.873439 systemd-logind[1570]: Removed session 17. Sep 12 00:18:48.887081 systemd[1]: Started sshd@17-10.0.0.92:22-10.0.0.1:58620.service - OpenSSH per-connection server daemon (10.0.0.1:58620). Sep 12 00:18:48.960927 sshd[5546]: Accepted publickey for core from 10.0.0.1 port 58620 ssh2: RSA SHA256:A0suq1pUL9tS3FfPj5cV4y8UVFHgmV6pScsqSKyxj00 Sep 12 00:18:48.962780 sshd-session[5546]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 00:18:48.968871 systemd-logind[1570]: New session 18 of user core. Sep 12 00:18:48.980867 systemd[1]: Started session-18.scope - Session 18 of User core. 
Sep 12 00:18:49.226968 sshd[5548]: Connection closed by 10.0.0.1 port 58620 Sep 12 00:18:49.227222 sshd-session[5546]: pam_unix(sshd:session): session closed for user core Sep 12 00:18:49.235144 systemd[1]: sshd@17-10.0.0.92:22-10.0.0.1:58620.service: Deactivated successfully. Sep 12 00:18:49.237374 systemd[1]: session-18.scope: Deactivated successfully. Sep 12 00:18:49.238259 systemd-logind[1570]: Session 18 logged out. Waiting for processes to exit. Sep 12 00:18:49.239814 systemd-logind[1570]: Removed session 18. Sep 12 00:18:51.573553 containerd[1588]: time="2025-09-12T00:18:51.573502855Z" level=info msg="TaskExit event in podsandbox handler container_id:\"f0c2205f8d13bd6fb6764e7a7f9f5a65d6979c6fcd78eb86f05a3ff30bb5fc6d\" id:\"f979805c6d156f06181ead04bb75d37fa6205a139869cd537d84f46831806bbc\" pid:5575 exited_at:{seconds:1757636331 nanos:573266533}" Sep 12 00:18:53.408953 kubelet[2728]: E0912 00:18:53.408904 2728 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 00:18:54.252501 systemd[1]: Started sshd@18-10.0.0.92:22-10.0.0.1:49918.service - OpenSSH per-connection server daemon (10.0.0.1:49918). Sep 12 00:18:54.301972 sshd[5586]: Accepted publickey for core from 10.0.0.1 port 49918 ssh2: RSA SHA256:A0suq1pUL9tS3FfPj5cV4y8UVFHgmV6pScsqSKyxj00 Sep 12 00:18:54.303494 sshd-session[5586]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 00:18:54.308134 systemd-logind[1570]: New session 19 of user core. Sep 12 00:18:54.318873 systemd[1]: Started session-19.scope - Session 19 of User core. Sep 12 00:18:54.411256 kubelet[2728]: E0912 00:18:54.410884 2728 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 00:18:54.462511 sshd[5588]: Connection closed by 10.0.0.1 port 49918 Sep 12 00:18:54.462866 sshd-session[5586]: pam_unix(sshd:session): session closed for user core Sep 12 00:18:54.470209 systemd[1]: sshd@18-10.0.0.92:22-10.0.0.1:49918.service: Deactivated successfully. Sep 12 00:18:54.476970 systemd[1]: session-19.scope: Deactivated successfully. Sep 12 00:18:54.482960 systemd-logind[1570]: Session 19 logged out. Waiting for processes to exit. Sep 12 00:18:54.486471 systemd-logind[1570]: Removed session 19. Sep 12 00:18:57.634878 containerd[1588]: time="2025-09-12T00:18:57.634812245Z" level=info msg="TaskExit event in podsandbox handler container_id:\"8e77c2db2f97c9d515122f6effb58addd8500501585efee73de604a843c53768\" id:\"fc46f1e795f7e641cf1e17d9d5c9f349f99e6704758f7fc848cc3e74d4626a53\" pid:5612 exited_at:{seconds:1757636337 nanos:634510580}" Sep 12 00:18:59.479984 systemd[1]: Started sshd@19-10.0.0.92:22-10.0.0.1:49920.service - OpenSSH per-connection server daemon (10.0.0.1:49920). Sep 12 00:18:59.528697 sshd[5627]: Accepted publickey for core from 10.0.0.1 port 49920 ssh2: RSA SHA256:A0suq1pUL9tS3FfPj5cV4y8UVFHgmV6pScsqSKyxj00 Sep 12 00:18:59.530349 sshd-session[5627]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 00:18:59.534580 systemd-logind[1570]: New session 20 of user core. Sep 12 00:18:59.541860 systemd[1]: Started session-20.scope - Session 20 of User core. 
Sep 12 00:18:59.653214 sshd[5629]: Connection closed by 10.0.0.1 port 49920 Sep 12 00:18:59.653520 sshd-session[5627]: pam_unix(sshd:session): session closed for user core Sep 12 00:18:59.658667 systemd[1]: sshd@19-10.0.0.92:22-10.0.0.1:49920.service: Deactivated successfully. Sep 12 00:18:59.660850 systemd[1]: session-20.scope: Deactivated successfully. Sep 12 00:18:59.661707 systemd-logind[1570]: Session 20 logged out. Waiting for processes to exit. Sep 12 00:18:59.663113 systemd-logind[1570]: Removed session 20. Sep 12 00:19:01.408810 kubelet[2728]: E0912 00:19:01.408757 2728 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 00:19:04.669732 systemd[1]: Started sshd@20-10.0.0.92:22-10.0.0.1:34296.service - OpenSSH per-connection server daemon (10.0.0.1:34296). Sep 12 00:19:04.718560 sshd[5644]: Accepted publickey for core from 10.0.0.1 port 34296 ssh2: RSA SHA256:A0suq1pUL9tS3FfPj5cV4y8UVFHgmV6pScsqSKyxj00 Sep 12 00:19:04.720297 sshd-session[5644]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 00:19:04.724897 systemd-logind[1570]: New session 21 of user core. Sep 12 00:19:04.736891 systemd[1]: Started session-21.scope - Session 21 of User core. Sep 12 00:19:04.847916 sshd[5646]: Connection closed by 10.0.0.1 port 34296 Sep 12 00:19:04.849776 sshd-session[5644]: pam_unix(sshd:session): session closed for user core Sep 12 00:19:04.853958 systemd[1]: sshd@20-10.0.0.92:22-10.0.0.1:34296.service: Deactivated successfully. Sep 12 00:19:04.856509 systemd[1]: session-21.scope: Deactivated successfully. Sep 12 00:19:04.857408 systemd-logind[1570]: Session 21 logged out. Waiting for processes to exit. Sep 12 00:19:04.858844 systemd-logind[1570]: Removed session 21. Sep 12 00:19:08.482704 containerd[1588]: time="2025-09-12T00:19:08.482620988Z" level=info msg="TaskExit event in podsandbox handler container_id:\"35d56abd8de7058402ca58dd95fa1e3ff27070645c66ab120a440f99c43d3174\" id:\"5f97057d0b1053b66835c3c1f66032b4565f72a50e365d9260470bf311bf0660\" pid:5671 exited_at:{seconds:1757636348 nanos:482288448}" Sep 12 00:19:09.862798 systemd[1]: Started sshd@21-10.0.0.92:22-10.0.0.1:34304.service - OpenSSH per-connection server daemon (10.0.0.1:34304). Sep 12 00:19:09.909596 sshd[5685]: Accepted publickey for core from 10.0.0.1 port 34304 ssh2: RSA SHA256:A0suq1pUL9tS3FfPj5cV4y8UVFHgmV6pScsqSKyxj00 Sep 12 00:19:09.911305 sshd-session[5685]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 00:19:09.916582 systemd-logind[1570]: New session 22 of user core. Sep 12 00:19:09.920871 systemd[1]: Started session-22.scope - Session 22 of User core. Sep 12 00:19:10.036352 sshd[5687]: Connection closed by 10.0.0.1 port 34304 Sep 12 00:19:10.036743 sshd-session[5685]: pam_unix(sshd:session): session closed for user core Sep 12 00:19:10.040842 systemd[1]: sshd@21-10.0.0.92:22-10.0.0.1:34304.service: Deactivated successfully. Sep 12 00:19:10.043145 systemd[1]: session-22.scope: Deactivated successfully. Sep 12 00:19:10.044000 systemd-logind[1570]: Session 22 logged out. Waiting for processes to exit. Sep 12 00:19:10.045840 systemd-logind[1570]: Removed session 22. 
Sep 12 00:19:10.220851 containerd[1588]: time="2025-09-12T00:19:10.220791790Z" level=info msg="TaskExit event in podsandbox handler container_id:\"8e77c2db2f97c9d515122f6effb58addd8500501585efee73de604a843c53768\" id:\"41bcd4dd4e03423bf9fd81cb4c069f9afe1b1281aa9547fe9f44a5eb17df5621\" pid:5712 exited_at:{seconds:1757636350 nanos:220350654}" Sep 12 00:19:13.409072 kubelet[2728]: E0912 00:19:13.408996 2728 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 00:19:15.061272 systemd[1]: Started sshd@22-10.0.0.92:22-10.0.0.1:52248.service - OpenSSH per-connection server daemon (10.0.0.1:52248). Sep 12 00:19:15.121223 sshd[5726]: Accepted publickey for core from 10.0.0.1 port 52248 ssh2: RSA SHA256:A0suq1pUL9tS3FfPj5cV4y8UVFHgmV6pScsqSKyxj00 Sep 12 00:19:15.122927 sshd-session[5726]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 00:19:15.127542 systemd-logind[1570]: New session 23 of user core. Sep 12 00:19:15.144851 systemd[1]: Started session-23.scope - Session 23 of User core. Sep 12 00:19:15.257498 sshd[5728]: Connection closed by 10.0.0.1 port 52248 Sep 12 00:19:15.257841 sshd-session[5726]: pam_unix(sshd:session): session closed for user core Sep 12 00:19:15.261591 systemd[1]: sshd@22-10.0.0.92:22-10.0.0.1:52248.service: Deactivated successfully. Sep 12 00:19:15.263536 systemd[1]: session-23.scope: Deactivated successfully. Sep 12 00:19:15.264358 systemd-logind[1570]: Session 23 logged out. Waiting for processes to exit. Sep 12 00:19:15.265579 systemd-logind[1570]: Removed session 23. Sep 12 00:19:20.279004 systemd[1]: Started sshd@23-10.0.0.92:22-10.0.0.1:42542.service - OpenSSH per-connection server daemon (10.0.0.1:42542). Sep 12 00:19:20.331505 sshd[5741]: Accepted publickey for core from 10.0.0.1 port 42542 ssh2: RSA SHA256:A0suq1pUL9tS3FfPj5cV4y8UVFHgmV6pScsqSKyxj00 Sep 12 00:19:20.333280 sshd-session[5741]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 00:19:20.338169 systemd-logind[1570]: New session 24 of user core. Sep 12 00:19:20.347922 systemd[1]: Started session-24.scope - Session 24 of User core. Sep 12 00:19:20.463677 sshd[5743]: Connection closed by 10.0.0.1 port 42542 Sep 12 00:19:20.464097 sshd-session[5741]: pam_unix(sshd:session): session closed for user core Sep 12 00:19:20.468302 systemd[1]: sshd@23-10.0.0.92:22-10.0.0.1:42542.service: Deactivated successfully. Sep 12 00:19:20.470391 systemd[1]: session-24.scope: Deactivated successfully. Sep 12 00:19:20.471317 systemd-logind[1570]: Session 24 logged out. Waiting for processes to exit. Sep 12 00:19:20.472696 systemd-logind[1570]: Removed session 24. Sep 12 00:19:21.579632 containerd[1588]: time="2025-09-12T00:19:21.579586764Z" level=info msg="TaskExit event in podsandbox handler container_id:\"f0c2205f8d13bd6fb6764e7a7f9f5a65d6979c6fcd78eb86f05a3ff30bb5fc6d\" id:\"71f7a2f47027b7cbc676a229eff0904dc13f06ea8eacd099b1737d0aa0b29a94\" pid:5767 exited_at:{seconds:1757636361 nanos:579311122}" Sep 12 00:19:25.480535 systemd[1]: Started sshd@24-10.0.0.92:22-10.0.0.1:42556.service - OpenSSH per-connection server daemon (10.0.0.1:42556). 
Sep 12 00:19:25.530234 sshd[5778]: Accepted publickey for core from 10.0.0.1 port 42556 ssh2: RSA SHA256:A0suq1pUL9tS3FfPj5cV4y8UVFHgmV6pScsqSKyxj00 Sep 12 00:19:25.531617 sshd-session[5778]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 00:19:25.535795 systemd-logind[1570]: New session 25 of user core. Sep 12 00:19:25.555855 systemd[1]: Started session-25.scope - Session 25 of User core. Sep 12 00:19:25.660588 sshd[5780]: Connection closed by 10.0.0.1 port 42556 Sep 12 00:19:25.660959 sshd-session[5778]: pam_unix(sshd:session): session closed for user core Sep 12 00:19:25.665059 systemd[1]: sshd@24-10.0.0.92:22-10.0.0.1:42556.service: Deactivated successfully. Sep 12 00:19:25.667254 systemd[1]: session-25.scope: Deactivated successfully. Sep 12 00:19:25.668091 systemd-logind[1570]: Session 25 logged out. Waiting for processes to exit. Sep 12 00:19:25.669332 systemd-logind[1570]: Removed session 25. Sep 12 00:19:27.640405 containerd[1588]: time="2025-09-12T00:19:27.640349813Z" level=info msg="TaskExit event in podsandbox handler container_id:\"8e77c2db2f97c9d515122f6effb58addd8500501585efee73de604a843c53768\" id:\"3b68ee22e5892b5f3fc2337a0116e37ba87eb717d80f4cf7073ccab28713b3d7\" pid:5804 exited_at:{seconds:1757636367 nanos:639955117}" Sep 12 00:19:30.408246 kubelet[2728]: E0912 00:19:30.408146 2728 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 00:19:30.674256 systemd[1]: Started sshd@25-10.0.0.92:22-10.0.0.1:42224.service - OpenSSH per-connection server daemon (10.0.0.1:42224). Sep 12 00:19:30.735526 sshd[5824]: Accepted publickey for core from 10.0.0.1 port 42224 ssh2: RSA SHA256:A0suq1pUL9tS3FfPj5cV4y8UVFHgmV6pScsqSKyxj00 Sep 12 00:19:30.737346 sshd-session[5824]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 00:19:30.742094 systemd-logind[1570]: New session 26 of user core. Sep 12 00:19:30.757858 systemd[1]: Started session-26.scope - Session 26 of User core. Sep 12 00:19:30.901286 sshd[5826]: Connection closed by 10.0.0.1 port 42224 Sep 12 00:19:30.901608 sshd-session[5824]: pam_unix(sshd:session): session closed for user core Sep 12 00:19:30.906112 systemd[1]: sshd@25-10.0.0.92:22-10.0.0.1:42224.service: Deactivated successfully. Sep 12 00:19:30.908292 systemd[1]: session-26.scope: Deactivated successfully. Sep 12 00:19:30.909068 systemd-logind[1570]: Session 26 logged out. Waiting for processes to exit. Sep 12 00:19:30.910229 systemd-logind[1570]: Removed session 26. Sep 12 00:19:35.680856 containerd[1588]: time="2025-09-12T00:19:35.680805355Z" level=info msg="TaskExit event in podsandbox handler container_id:\"f0c2205f8d13bd6fb6764e7a7f9f5a65d6979c6fcd78eb86f05a3ff30bb5fc6d\" id:\"37a0043b1392697f9f7227340a53368fe1959a8ad1c4b2ef376eeafa7d277c07\" pid:5855 exited_at:{seconds:1757636375 nanos:680568107}" Sep 12 00:19:35.914532 systemd[1]: Started sshd@26-10.0.0.92:22-10.0.0.1:42234.service - OpenSSH per-connection server daemon (10.0.0.1:42234). Sep 12 00:19:35.970577 sshd[5866]: Accepted publickey for core from 10.0.0.1 port 42234 ssh2: RSA SHA256:A0suq1pUL9tS3FfPj5cV4y8UVFHgmV6pScsqSKyxj00 Sep 12 00:19:35.972208 sshd-session[5866]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 00:19:35.976665 systemd-logind[1570]: New session 27 of user core. Sep 12 00:19:35.982824 systemd[1]: Started session-27.scope - Session 27 of User core. 
Sep 12 00:19:36.098768 sshd[5868]: Connection closed by 10.0.0.1 port 42234 Sep 12 00:19:36.099119 sshd-session[5866]: pam_unix(sshd:session): session closed for user core Sep 12 00:19:36.103313 systemd[1]: sshd@26-10.0.0.92:22-10.0.0.1:42234.service: Deactivated successfully. Sep 12 00:19:36.105456 systemd[1]: session-27.scope: Deactivated successfully. Sep 12 00:19:36.106228 systemd-logind[1570]: Session 27 logged out. Waiting for processes to exit. Sep 12 00:19:36.107595 systemd-logind[1570]: Removed session 27. Sep 12 00:19:38.490036 containerd[1588]: time="2025-09-12T00:19:38.489981266Z" level=info msg="TaskExit event in podsandbox handler container_id:\"35d56abd8de7058402ca58dd95fa1e3ff27070645c66ab120a440f99c43d3174\" id:\"a7e74d8b15b8e38c21d2d7d6312e0f10e228c9fb1b534e4c03f0c12ce9d8ff4b\" pid:5892 exited_at:{seconds:1757636378 nanos:489594845}" Sep 12 00:19:39.408939 kubelet[2728]: E0912 00:19:39.408890 2728 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 00:19:41.117848 systemd[1]: Started sshd@27-10.0.0.92:22-10.0.0.1:40140.service - OpenSSH per-connection server daemon (10.0.0.1:40140). Sep 12 00:19:41.167614 sshd[5906]: Accepted publickey for core from 10.0.0.1 port 40140 ssh2: RSA SHA256:A0suq1pUL9tS3FfPj5cV4y8UVFHgmV6pScsqSKyxj00 Sep 12 00:19:41.169289 sshd-session[5906]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 00:19:41.174452 systemd-logind[1570]: New session 28 of user core. Sep 12 00:19:41.185961 systemd[1]: Started session-28.scope - Session 28 of User core. Sep 12 00:19:41.305493 sshd[5909]: Connection closed by 10.0.0.1 port 40140 Sep 12 00:19:41.305880 sshd-session[5906]: pam_unix(sshd:session): session closed for user core Sep 12 00:19:41.310687 systemd[1]: sshd@27-10.0.0.92:22-10.0.0.1:40140.service: Deactivated successfully. Sep 12 00:19:41.312951 systemd[1]: session-28.scope: Deactivated successfully. Sep 12 00:19:41.313850 systemd-logind[1570]: Session 28 logged out. Waiting for processes to exit. Sep 12 00:19:41.315764 systemd-logind[1570]: Removed session 28. Sep 12 00:19:46.330870 systemd[1]: Started sshd@28-10.0.0.92:22-10.0.0.1:40144.service - OpenSSH per-connection server daemon (10.0.0.1:40144). Sep 12 00:19:46.399887 sshd[5942]: Accepted publickey for core from 10.0.0.1 port 40144 ssh2: RSA SHA256:A0suq1pUL9tS3FfPj5cV4y8UVFHgmV6pScsqSKyxj00 Sep 12 00:19:46.401781 sshd-session[5942]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 00:19:46.406597 systemd-logind[1570]: New session 29 of user core. Sep 12 00:19:46.414863 systemd[1]: Started session-29.scope - Session 29 of User core. Sep 12 00:19:46.586822 sshd[5946]: Connection closed by 10.0.0.1 port 40144 Sep 12 00:19:46.587021 sshd-session[5942]: pam_unix(sshd:session): session closed for user core Sep 12 00:19:46.593155 systemd[1]: sshd@28-10.0.0.92:22-10.0.0.1:40144.service: Deactivated successfully. Sep 12 00:19:46.595443 systemd[1]: session-29.scope: Deactivated successfully. Sep 12 00:19:46.596143 systemd-logind[1570]: Session 29 logged out. Waiting for processes to exit. Sep 12 00:19:46.597707 systemd-logind[1570]: Removed session 29. 
Sep 12 00:19:51.577426 containerd[1588]: time="2025-09-12T00:19:51.577320252Z" level=info msg="TaskExit event in podsandbox handler container_id:\"f0c2205f8d13bd6fb6764e7a7f9f5a65d6979c6fcd78eb86f05a3ff30bb5fc6d\" id:\"16a58fb2962e2eb2c6ae0080de6097b875f8a10317fd1a2708930a9c3d389f56\" pid:5969 exited_at:{seconds:1757636391 nanos:576802896}" Sep 12 00:19:51.600542 systemd[1]: Started sshd@29-10.0.0.92:22-10.0.0.1:57218.service - OpenSSH per-connection server daemon (10.0.0.1:57218). Sep 12 00:19:51.644800 sshd[5980]: Accepted publickey for core from 10.0.0.1 port 57218 ssh2: RSA SHA256:A0suq1pUL9tS3FfPj5cV4y8UVFHgmV6pScsqSKyxj00 Sep 12 00:19:51.646480 sshd-session[5980]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 00:19:51.650902 systemd-logind[1570]: New session 30 of user core. Sep 12 00:19:51.658846 systemd[1]: Started session-30.scope - Session 30 of User core. Sep 12 00:19:51.769679 sshd[5982]: Connection closed by 10.0.0.1 port 57218 Sep 12 00:19:51.770021 sshd-session[5980]: pam_unix(sshd:session): session closed for user core Sep 12 00:19:51.774198 systemd[1]: sshd@29-10.0.0.92:22-10.0.0.1:57218.service: Deactivated successfully. Sep 12 00:19:51.776190 systemd[1]: session-30.scope: Deactivated successfully. Sep 12 00:19:51.777105 systemd-logind[1570]: Session 30 logged out. Waiting for processes to exit. Sep 12 00:19:51.778473 systemd-logind[1570]: Removed session 30. Sep 12 00:19:56.788509 systemd[1]: Started sshd@30-10.0.0.92:22-10.0.0.1:57234.service - OpenSSH per-connection server daemon (10.0.0.1:57234). Sep 12 00:19:56.846836 sshd[5996]: Accepted publickey for core from 10.0.0.1 port 57234 ssh2: RSA SHA256:A0suq1pUL9tS3FfPj5cV4y8UVFHgmV6pScsqSKyxj00 Sep 12 00:19:56.848544 sshd-session[5996]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 00:19:56.853660 systemd-logind[1570]: New session 31 of user core. Sep 12 00:19:56.865881 systemd[1]: Started session-31.scope - Session 31 of User core. Sep 12 00:19:57.209855 sshd[5998]: Connection closed by 10.0.0.1 port 57234 Sep 12 00:19:57.210271 sshd-session[5996]: pam_unix(sshd:session): session closed for user core Sep 12 00:19:57.213474 systemd[1]: sshd@30-10.0.0.92:22-10.0.0.1:57234.service: Deactivated successfully. Sep 12 00:19:57.216109 systemd[1]: session-31.scope: Deactivated successfully. Sep 12 00:19:57.217926 systemd-logind[1570]: Session 31 logged out. Waiting for processes to exit. Sep 12 00:19:57.219334 systemd-logind[1570]: Removed session 31. Sep 12 00:19:57.642024 containerd[1588]: time="2025-09-12T00:19:57.641974516Z" level=info msg="TaskExit event in podsandbox handler container_id:\"8e77c2db2f97c9d515122f6effb58addd8500501585efee73de604a843c53768\" id:\"91839de2975c4b786a76f85aa09a0c9e9d679b9e5c1baca59cf64fe784b0ed94\" pid:6021 exited_at:{seconds:1757636397 nanos:641639383}" Sep 12 00:20:02.012223 systemd[1]: Started sshd@31-10.0.0.92:22-10.0.0.1:50430.service - OpenSSH per-connection server daemon (10.0.0.1:50430). Sep 12 00:20:02.061972 sshd[6034]: Accepted publickey for core from 10.0.0.1 port 50430 ssh2: RSA SHA256:A0suq1pUL9tS3FfPj5cV4y8UVFHgmV6pScsqSKyxj00 Sep 12 00:20:02.063787 sshd-session[6034]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 00:20:02.068159 systemd-logind[1570]: New session 32 of user core. Sep 12 00:20:02.078881 systemd[1]: Started session-32.scope - Session 32 of User core. 
Sep 12 00:20:02.190345 sshd[6036]: Connection closed by 10.0.0.1 port 50430 Sep 12 00:20:02.190652 sshd-session[6034]: pam_unix(sshd:session): session closed for user core Sep 12 00:20:02.194572 systemd[1]: sshd@31-10.0.0.92:22-10.0.0.1:50430.service: Deactivated successfully. Sep 12 00:20:02.196555 systemd[1]: session-32.scope: Deactivated successfully. Sep 12 00:20:02.197318 systemd-logind[1570]: Session 32 logged out. Waiting for processes to exit. Sep 12 00:20:02.198820 systemd-logind[1570]: Removed session 32. Sep 12 00:20:03.408429 kubelet[2728]: E0912 00:20:03.408364 2728 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 00:20:06.411498 kubelet[2728]: E0912 00:20:06.411449 2728 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 00:20:07.216197 systemd[1]: Started sshd@32-10.0.0.92:22-10.0.0.1:50436.service - OpenSSH per-connection server daemon (10.0.0.1:50436). Sep 12 00:20:07.272253 sshd[6051]: Accepted publickey for core from 10.0.0.1 port 50436 ssh2: RSA SHA256:A0suq1pUL9tS3FfPj5cV4y8UVFHgmV6pScsqSKyxj00 Sep 12 00:20:07.273799 sshd-session[6051]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 00:20:07.278498 systemd-logind[1570]: New session 33 of user core. Sep 12 00:20:07.289849 systemd[1]: Started session-33.scope - Session 33 of User core. Sep 12 00:20:07.413351 sshd[6053]: Connection closed by 10.0.0.1 port 50436 Sep 12 00:20:07.413703 sshd-session[6051]: pam_unix(sshd:session): session closed for user core Sep 12 00:20:07.418331 systemd[1]: sshd@32-10.0.0.92:22-10.0.0.1:50436.service: Deactivated successfully. Sep 12 00:20:07.420587 systemd[1]: session-33.scope: Deactivated successfully. Sep 12 00:20:07.421485 systemd-logind[1570]: Session 33 logged out. Waiting for processes to exit. Sep 12 00:20:07.423661 systemd-logind[1570]: Removed session 33. Sep 12 00:20:08.488133 containerd[1588]: time="2025-09-12T00:20:08.488077504Z" level=info msg="TaskExit event in podsandbox handler container_id:\"35d56abd8de7058402ca58dd95fa1e3ff27070645c66ab120a440f99c43d3174\" id:\"5d215dee1de47b2e91bd8f05334a25f512b03d635f0650c936ddb6caf98fff06\" pid:6077 exited_at:{seconds:1757636408 nanos:487722936}" Sep 12 00:20:10.226743 containerd[1588]: time="2025-09-12T00:20:10.226661424Z" level=info msg="TaskExit event in podsandbox handler container_id:\"8e77c2db2f97c9d515122f6effb58addd8500501585efee73de604a843c53768\" id:\"f0b41b4a4cb6db0a5849a767e4e4774998df30a095fed5c2f6b978d13dc75d45\" pid:6103 exited_at:{seconds:1757636410 nanos:226223929}" Sep 12 00:20:12.427957 systemd[1]: Started sshd@33-10.0.0.92:22-10.0.0.1:41466.service - OpenSSH per-connection server daemon (10.0.0.1:41466). Sep 12 00:20:12.498174 sshd[6115]: Accepted publickey for core from 10.0.0.1 port 41466 ssh2: RSA SHA256:A0suq1pUL9tS3FfPj5cV4y8UVFHgmV6pScsqSKyxj00 Sep 12 00:20:12.499706 sshd-session[6115]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 00:20:12.504287 systemd-logind[1570]: New session 34 of user core. Sep 12 00:20:12.511861 systemd[1]: Started session-34.scope - Session 34 of User core. 
Sep 12 00:20:12.640680 sshd[6117]: Connection closed by 10.0.0.1 port 41466 Sep 12 00:20:12.641033 sshd-session[6115]: pam_unix(sshd:session): session closed for user core Sep 12 00:20:12.645033 systemd[1]: sshd@33-10.0.0.92:22-10.0.0.1:41466.service: Deactivated successfully. Sep 12 00:20:12.647090 systemd[1]: session-34.scope: Deactivated successfully. Sep 12 00:20:12.648052 systemd-logind[1570]: Session 34 logged out. Waiting for processes to exit. Sep 12 00:20:12.649413 systemd-logind[1570]: Removed session 34. Sep 12 00:20:17.408423 kubelet[2728]: E0912 00:20:17.408371 2728 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 00:20:17.657449 systemd[1]: Started sshd@34-10.0.0.92:22-10.0.0.1:41482.service - OpenSSH per-connection server daemon (10.0.0.1:41482). Sep 12 00:20:17.706156 sshd[6131]: Accepted publickey for core from 10.0.0.1 port 41482 ssh2: RSA SHA256:A0suq1pUL9tS3FfPj5cV4y8UVFHgmV6pScsqSKyxj00 Sep 12 00:20:17.707656 sshd-session[6131]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 00:20:17.712147 systemd-logind[1570]: New session 35 of user core. Sep 12 00:20:17.720928 systemd[1]: Started session-35.scope - Session 35 of User core. Sep 12 00:20:17.829051 sshd[6133]: Connection closed by 10.0.0.1 port 41482 Sep 12 00:20:17.829362 sshd-session[6131]: pam_unix(sshd:session): session closed for user core Sep 12 00:20:17.833473 systemd[1]: sshd@34-10.0.0.92:22-10.0.0.1:41482.service: Deactivated successfully. Sep 12 00:20:17.835507 systemd[1]: session-35.scope: Deactivated successfully. Sep 12 00:20:17.836438 systemd-logind[1570]: Session 35 logged out. Waiting for processes to exit. Sep 12 00:20:17.837563 systemd-logind[1570]: Removed session 35. Sep 12 00:20:19.408071 kubelet[2728]: E0912 00:20:19.408022 2728 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 00:20:21.591637 containerd[1588]: time="2025-09-12T00:20:21.591576965Z" level=info msg="TaskExit event in podsandbox handler container_id:\"f0c2205f8d13bd6fb6764e7a7f9f5a65d6979c6fcd78eb86f05a3ff30bb5fc6d\" id:\"3387e061cf6f414172c580618fc9eee8bd26082e83d22b2e07a9f23776213375\" pid:6159 exited_at:{seconds:1757636421 nanos:591110737}" Sep 12 00:20:22.846035 systemd[1]: Started sshd@35-10.0.0.92:22-10.0.0.1:32848.service - OpenSSH per-connection server daemon (10.0.0.1:32848). Sep 12 00:20:22.898493 sshd[6170]: Accepted publickey for core from 10.0.0.1 port 32848 ssh2: RSA SHA256:A0suq1pUL9tS3FfPj5cV4y8UVFHgmV6pScsqSKyxj00 Sep 12 00:20:22.900400 sshd-session[6170]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 00:20:22.905095 systemd-logind[1570]: New session 36 of user core. Sep 12 00:20:22.916924 systemd[1]: Started session-36.scope - Session 36 of User core. Sep 12 00:20:23.028693 sshd[6172]: Connection closed by 10.0.0.1 port 32848 Sep 12 00:20:23.029008 sshd-session[6170]: pam_unix(sshd:session): session closed for user core Sep 12 00:20:23.032618 systemd[1]: sshd@35-10.0.0.92:22-10.0.0.1:32848.service: Deactivated successfully. Sep 12 00:20:23.034589 systemd[1]: session-36.scope: Deactivated successfully. Sep 12 00:20:23.035387 systemd-logind[1570]: Session 36 logged out. Waiting for processes to exit. Sep 12 00:20:23.036564 systemd-logind[1570]: Removed session 36. 
Sep 12 00:20:23.408411 kubelet[2728]: E0912 00:20:23.408364 2728 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 00:20:27.630451 containerd[1588]: time="2025-09-12T00:20:27.630403151Z" level=info msg="TaskExit event in podsandbox handler container_id:\"8e77c2db2f97c9d515122f6effb58addd8500501585efee73de604a843c53768\" id:\"43ad230af03393656c01de7ea2bc89db7dd4beb968dc4652bbd319db0cf5fe66\" pid:6196 exited_at:{seconds:1757636427 nanos:630066947}" Sep 12 00:20:28.042810 systemd[1]: Started sshd@36-10.0.0.92:22-10.0.0.1:32852.service - OpenSSH per-connection server daemon (10.0.0.1:32852). Sep 12 00:20:28.098693 sshd[6208]: Accepted publickey for core from 10.0.0.1 port 32852 ssh2: RSA SHA256:A0suq1pUL9tS3FfPj5cV4y8UVFHgmV6pScsqSKyxj00 Sep 12 00:20:28.100178 sshd-session[6208]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 00:20:28.105023 systemd-logind[1570]: New session 37 of user core. Sep 12 00:20:28.112849 systemd[1]: Started session-37.scope - Session 37 of User core. Sep 12 00:20:28.262178 sshd[6210]: Connection closed by 10.0.0.1 port 32852 Sep 12 00:20:28.262505 sshd-session[6208]: pam_unix(sshd:session): session closed for user core Sep 12 00:20:28.266905 systemd[1]: sshd@36-10.0.0.92:22-10.0.0.1:32852.service: Deactivated successfully. Sep 12 00:20:28.268857 systemd[1]: session-37.scope: Deactivated successfully. Sep 12 00:20:28.269615 systemd-logind[1570]: Session 37 logged out. Waiting for processes to exit. Sep 12 00:20:28.270913 systemd-logind[1570]: Removed session 37. Sep 12 00:20:33.276324 systemd[1]: Started sshd@37-10.0.0.92:22-10.0.0.1:32850.service - OpenSSH per-connection server daemon (10.0.0.1:32850). Sep 12 00:20:33.334740 sshd[6225]: Accepted publickey for core from 10.0.0.1 port 32850 ssh2: RSA SHA256:A0suq1pUL9tS3FfPj5cV4y8UVFHgmV6pScsqSKyxj00 Sep 12 00:20:33.336365 sshd-session[6225]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 00:20:33.340985 systemd-logind[1570]: New session 38 of user core. Sep 12 00:20:33.350843 systemd[1]: Started session-38.scope - Session 38 of User core. Sep 12 00:20:33.466749 sshd[6227]: Connection closed by 10.0.0.1 port 32850 Sep 12 00:20:33.467078 sshd-session[6225]: pam_unix(sshd:session): session closed for user core Sep 12 00:20:33.470940 systemd[1]: sshd@37-10.0.0.92:22-10.0.0.1:32850.service: Deactivated successfully. Sep 12 00:20:33.473037 systemd[1]: session-38.scope: Deactivated successfully. Sep 12 00:20:33.473915 systemd-logind[1570]: Session 38 logged out. Waiting for processes to exit. Sep 12 00:20:33.475013 systemd-logind[1570]: Removed session 38. Sep 12 00:20:35.688603 containerd[1588]: time="2025-09-12T00:20:35.688542602Z" level=info msg="TaskExit event in podsandbox handler container_id:\"f0c2205f8d13bd6fb6764e7a7f9f5a65d6979c6fcd78eb86f05a3ff30bb5fc6d\" id:\"e3e94dc40262ed21ae5f7404804498d29ef4792b5404286d5f5f16acb5ee72e4\" pid:6253 exited_at:{seconds:1757636435 nanos:688255578}" Sep 12 00:20:37.428843 kubelet[2728]: E0912 00:20:37.428782 2728 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 00:20:38.483057 systemd[1]: Started sshd@38-10.0.0.92:22-10.0.0.1:32864.service - OpenSSH per-connection server daemon (10.0.0.1:32864). 
Sep 12 00:20:38.504191 containerd[1588]: time="2025-09-12T00:20:38.504128734Z" level=info msg="TaskExit event in podsandbox handler container_id:\"35d56abd8de7058402ca58dd95fa1e3ff27070645c66ab120a440f99c43d3174\" id:\"b1c26e1f86539f8ba14786f4be7328edf7d17632ba149124ac8c33d8d9d4c35b\" pid:6276 exited_at:{seconds:1757636438 nanos:503579354}" Sep 12 00:20:38.536845 sshd[6289]: Accepted publickey for core from 10.0.0.1 port 32864 ssh2: RSA SHA256:A0suq1pUL9tS3FfPj5cV4y8UVFHgmV6pScsqSKyxj00 Sep 12 00:20:38.538574 sshd-session[6289]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 00:20:38.543439 systemd-logind[1570]: New session 39 of user core. Sep 12 00:20:38.551868 systemd[1]: Started session-39.scope - Session 39 of User core. Sep 12 00:20:38.673432 sshd[6293]: Connection closed by 10.0.0.1 port 32864 Sep 12 00:20:38.673832 sshd-session[6289]: pam_unix(sshd:session): session closed for user core Sep 12 00:20:38.678447 systemd[1]: sshd@38-10.0.0.92:22-10.0.0.1:32864.service: Deactivated successfully. Sep 12 00:20:38.680763 systemd[1]: session-39.scope: Deactivated successfully. Sep 12 00:20:38.684021 systemd-logind[1570]: Session 39 logged out. Waiting for processes to exit. Sep 12 00:20:38.685619 systemd-logind[1570]: Removed session 39. Sep 12 00:20:43.689927 systemd[1]: Started sshd@39-10.0.0.92:22-10.0.0.1:32800.service - OpenSSH per-connection server daemon (10.0.0.1:32800). Sep 12 00:20:43.742060 sshd[6307]: Accepted publickey for core from 10.0.0.1 port 32800 ssh2: RSA SHA256:A0suq1pUL9tS3FfPj5cV4y8UVFHgmV6pScsqSKyxj00 Sep 12 00:20:43.744014 sshd-session[6307]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 00:20:43.749418 systemd-logind[1570]: New session 40 of user core. Sep 12 00:20:43.765009 systemd[1]: Started session-40.scope - Session 40 of User core. Sep 12 00:20:43.886511 sshd[6309]: Connection closed by 10.0.0.1 port 32800 Sep 12 00:20:43.886891 sshd-session[6307]: pam_unix(sshd:session): session closed for user core Sep 12 00:20:43.891796 systemd[1]: sshd@39-10.0.0.92:22-10.0.0.1:32800.service: Deactivated successfully. Sep 12 00:20:43.894017 systemd[1]: session-40.scope: Deactivated successfully. Sep 12 00:20:43.894902 systemd-logind[1570]: Session 40 logged out. Waiting for processes to exit. Sep 12 00:20:43.896659 systemd-logind[1570]: Removed session 40. Sep 12 00:20:48.900282 systemd[1]: Started sshd@40-10.0.0.92:22-10.0.0.1:32806.service - OpenSSH per-connection server daemon (10.0.0.1:32806). Sep 12 00:20:48.955196 sshd[6329]: Accepted publickey for core from 10.0.0.1 port 32806 ssh2: RSA SHA256:A0suq1pUL9tS3FfPj5cV4y8UVFHgmV6pScsqSKyxj00 Sep 12 00:20:48.956799 sshd-session[6329]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 00:20:48.961092 systemd-logind[1570]: New session 41 of user core. Sep 12 00:20:48.968868 systemd[1]: Started session-41.scope - Session 41 of User core. Sep 12 00:20:49.155958 sshd[6331]: Connection closed by 10.0.0.1 port 32806 Sep 12 00:20:49.157003 sshd-session[6329]: pam_unix(sshd:session): session closed for user core Sep 12 00:20:49.162500 systemd[1]: sshd@40-10.0.0.92:22-10.0.0.1:32806.service: Deactivated successfully. Sep 12 00:20:49.165096 systemd[1]: session-41.scope: Deactivated successfully. Sep 12 00:20:49.166287 systemd-logind[1570]: Session 41 logged out. Waiting for processes to exit. Sep 12 00:20:49.168142 systemd-logind[1570]: Removed session 41. 
Sep 12 00:20:51.590213 containerd[1588]: time="2025-09-12T00:20:51.590154357Z" level=info msg="TaskExit event in podsandbox handler container_id:\"f0c2205f8d13bd6fb6764e7a7f9f5a65d6979c6fcd78eb86f05a3ff30bb5fc6d\" id:\"76c3fa6a5020bdb18acd2b42a88ccf0e3b9679f2dd55b0bc1269512d6e4d530c\" pid:6357 exited_at:{seconds:1757636451 nanos:589661059}" Sep 12 00:20:54.169886 systemd[1]: Started sshd@41-10.0.0.92:22-10.0.0.1:50890.service - OpenSSH per-connection server daemon (10.0.0.1:50890). Sep 12 00:20:54.226024 sshd[6368]: Accepted publickey for core from 10.0.0.1 port 50890 ssh2: RSA SHA256:A0suq1pUL9tS3FfPj5cV4y8UVFHgmV6pScsqSKyxj00 Sep 12 00:20:54.227910 sshd-session[6368]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 00:20:54.233040 systemd-logind[1570]: New session 42 of user core. Sep 12 00:20:54.242846 systemd[1]: Started session-42.scope - Session 42 of User core. Sep 12 00:20:54.358876 sshd[6370]: Connection closed by 10.0.0.1 port 50890 Sep 12 00:20:54.359317 sshd-session[6368]: pam_unix(sshd:session): session closed for user core Sep 12 00:20:54.365154 systemd[1]: sshd@41-10.0.0.92:22-10.0.0.1:50890.service: Deactivated successfully. Sep 12 00:20:54.367678 systemd[1]: session-42.scope: Deactivated successfully. Sep 12 00:20:54.368580 systemd-logind[1570]: Session 42 logged out. Waiting for processes to exit. Sep 12 00:20:54.370396 systemd-logind[1570]: Removed session 42. Sep 12 00:20:54.409176 kubelet[2728]: E0912 00:20:54.409079 2728 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 00:20:57.626575 containerd[1588]: time="2025-09-12T00:20:57.626529780Z" level=info msg="TaskExit event in podsandbox handler container_id:\"8e77c2db2f97c9d515122f6effb58addd8500501585efee73de604a843c53768\" id:\"11085332ff96868a5824e0a5f53f1f42c838fbd81c0f30c96130e954d5f78490\" pid:6396 exited_at:{seconds:1757636457 nanos:626253248}" Sep 12 00:20:59.378257 systemd[1]: Started sshd@42-10.0.0.92:22-10.0.0.1:50894.service - OpenSSH per-connection server daemon (10.0.0.1:50894). Sep 12 00:20:59.433687 sshd[6409]: Accepted publickey for core from 10.0.0.1 port 50894 ssh2: RSA SHA256:A0suq1pUL9tS3FfPj5cV4y8UVFHgmV6pScsqSKyxj00 Sep 12 00:20:59.435178 sshd-session[6409]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 00:20:59.440230 systemd-logind[1570]: New session 43 of user core. Sep 12 00:20:59.447860 systemd[1]: Started session-43.scope - Session 43 of User core. Sep 12 00:20:59.571005 sshd[6411]: Connection closed by 10.0.0.1 port 50894 Sep 12 00:20:59.571445 sshd-session[6409]: pam_unix(sshd:session): session closed for user core Sep 12 00:20:59.577825 systemd[1]: sshd@42-10.0.0.92:22-10.0.0.1:50894.service: Deactivated successfully. Sep 12 00:20:59.580697 systemd[1]: session-43.scope: Deactivated successfully. Sep 12 00:20:59.581831 systemd-logind[1570]: Session 43 logged out. Waiting for processes to exit. Sep 12 00:20:59.584224 systemd-logind[1570]: Removed session 43. Sep 12 00:21:04.583876 systemd[1]: Started sshd@43-10.0.0.92:22-10.0.0.1:49836.service - OpenSSH per-connection server daemon (10.0.0.1:49836). 
Sep 12 00:21:04.637119 sshd[6426]: Accepted publickey for core from 10.0.0.1 port 49836 ssh2: RSA SHA256:A0suq1pUL9tS3FfPj5cV4y8UVFHgmV6pScsqSKyxj00 Sep 12 00:21:04.638467 sshd-session[6426]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 00:21:04.642930 systemd-logind[1570]: New session 44 of user core. Sep 12 00:21:04.650845 systemd[1]: Started session-44.scope - Session 44 of User core. Sep 12 00:21:04.757588 sshd[6428]: Connection closed by 10.0.0.1 port 49836 Sep 12 00:21:04.758063 sshd-session[6426]: pam_unix(sshd:session): session closed for user core Sep 12 00:21:04.761874 systemd[1]: sshd@43-10.0.0.92:22-10.0.0.1:49836.service: Deactivated successfully. Sep 12 00:21:04.764184 systemd[1]: session-44.scope: Deactivated successfully. Sep 12 00:21:04.766055 systemd-logind[1570]: Session 44 logged out. Waiting for processes to exit. Sep 12 00:21:04.767644 systemd-logind[1570]: Removed session 44. Sep 12 00:21:08.534027 containerd[1588]: time="2025-09-12T00:21:08.533977323Z" level=info msg="TaskExit event in podsandbox handler container_id:\"35d56abd8de7058402ca58dd95fa1e3ff27070645c66ab120a440f99c43d3174\" id:\"ab1b1361bc5dcd29fc9cb9fb321f60479f6e75d52d1bcde081d121775b8a5247\" pid:6451 exited_at:{seconds:1757636468 nanos:533615340}" Sep 12 00:21:09.779056 systemd[1]: Started sshd@44-10.0.0.92:22-10.0.0.1:49852.service - OpenSSH per-connection server daemon (10.0.0.1:49852). Sep 12 00:21:09.842947 sshd[6464]: Accepted publickey for core from 10.0.0.1 port 49852 ssh2: RSA SHA256:A0suq1pUL9tS3FfPj5cV4y8UVFHgmV6pScsqSKyxj00 Sep 12 00:21:09.844446 sshd-session[6464]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 00:21:09.848828 systemd-logind[1570]: New session 45 of user core. Sep 12 00:21:09.858959 systemd[1]: Started session-45.scope - Session 45 of User core. Sep 12 00:21:10.007193 sshd[6466]: Connection closed by 10.0.0.1 port 49852 Sep 12 00:21:10.007544 sshd-session[6464]: pam_unix(sshd:session): session closed for user core Sep 12 00:21:10.012492 systemd[1]: sshd@44-10.0.0.92:22-10.0.0.1:49852.service: Deactivated successfully. Sep 12 00:21:10.014747 systemd[1]: session-45.scope: Deactivated successfully. Sep 12 00:21:10.015733 systemd-logind[1570]: Session 45 logged out. Waiting for processes to exit. Sep 12 00:21:10.017750 systemd-logind[1570]: Removed session 45. Sep 12 00:21:10.220707 containerd[1588]: time="2025-09-12T00:21:10.220659889Z" level=info msg="TaskExit event in podsandbox handler container_id:\"8e77c2db2f97c9d515122f6effb58addd8500501585efee73de604a843c53768\" id:\"062acd63af6ae7455ecab896d112b9d54c2c3dc12ddb6ac560e1c4dc2a8f8aca\" pid:6491 exited_at:{seconds:1757636470 nanos:220260586}" Sep 12 00:21:10.408511 kubelet[2728]: E0912 00:21:10.408468 2728 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 00:21:15.023385 systemd[1]: Started sshd@45-10.0.0.92:22-10.0.0.1:44240.service - OpenSSH per-connection server daemon (10.0.0.1:44240). Sep 12 00:21:15.069737 sshd[6508]: Accepted publickey for core from 10.0.0.1 port 44240 ssh2: RSA SHA256:A0suq1pUL9tS3FfPj5cV4y8UVFHgmV6pScsqSKyxj00 Sep 12 00:21:15.071173 sshd-session[6508]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 00:21:15.075702 systemd-logind[1570]: New session 46 of user core. Sep 12 00:21:15.086853 systemd[1]: Started session-46.scope - Session 46 of User core. 
Sep 12 00:21:15.200367 sshd[6510]: Connection closed by 10.0.0.1 port 44240
Sep 12 00:21:15.200691 sshd-session[6508]: pam_unix(sshd:session): session closed for user core
Sep 12 00:21:15.204621 systemd[1]: sshd@45-10.0.0.92:22-10.0.0.1:44240.service: Deactivated successfully.
Sep 12 00:21:15.206815 systemd[1]: session-46.scope: Deactivated successfully.
Sep 12 00:21:15.207793 systemd-logind[1570]: Session 46 logged out. Waiting for processes to exit.
Sep 12 00:21:15.209082 systemd-logind[1570]: Removed session 46.
Sep 12 00:21:20.213787 systemd[1]: Started sshd@46-10.0.0.92:22-10.0.0.1:42306.service - OpenSSH per-connection server daemon (10.0.0.1:42306).
Sep 12 00:21:20.275382 sshd[6544]: Accepted publickey for core from 10.0.0.1 port 42306 ssh2: RSA SHA256:A0suq1pUL9tS3FfPj5cV4y8UVFHgmV6pScsqSKyxj00
Sep 12 00:21:20.276834 sshd-session[6544]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 12 00:21:20.281284 systemd-logind[1570]: New session 47 of user core.
Sep 12 00:21:20.290917 systemd[1]: Started session-47.scope - Session 47 of User core.
Sep 12 00:21:20.403961 sshd[6546]: Connection closed by 10.0.0.1 port 42306
Sep 12 00:21:20.404359 sshd-session[6544]: pam_unix(sshd:session): session closed for user core
Sep 12 00:21:20.414845 systemd[1]: sshd@46-10.0.0.92:22-10.0.0.1:42306.service: Deactivated successfully.
Sep 12 00:21:20.416973 systemd[1]: session-47.scope: Deactivated successfully.
Sep 12 00:21:20.417828 systemd-logind[1570]: Session 47 logged out. Waiting for processes to exit.
Sep 12 00:21:20.421756 systemd[1]: Started sshd@47-10.0.0.92:22-10.0.0.1:42318.service - OpenSSH per-connection server daemon (10.0.0.1:42318).
Sep 12 00:21:20.422374 systemd-logind[1570]: Removed session 47.
Sep 12 00:21:20.478573 sshd[6559]: Accepted publickey for core from 10.0.0.1 port 42318 ssh2: RSA SHA256:A0suq1pUL9tS3FfPj5cV4y8UVFHgmV6pScsqSKyxj00
Sep 12 00:21:20.480224 sshd-session[6559]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 12 00:21:20.484774 systemd-logind[1570]: New session 48 of user core.
Sep 12 00:21:20.496832 systemd[1]: Started session-48.scope - Session 48 of User core.
Sep 12 00:21:20.815518 sshd[6561]: Connection closed by 10.0.0.1 port 42318
Sep 12 00:21:20.817600 sshd-session[6559]: pam_unix(sshd:session): session closed for user core
Sep 12 00:21:20.828658 systemd[1]: sshd@47-10.0.0.92:22-10.0.0.1:42318.service: Deactivated successfully.
Sep 12 00:21:20.831348 systemd[1]: session-48.scope: Deactivated successfully.
Sep 12 00:21:20.832300 systemd-logind[1570]: Session 48 logged out. Waiting for processes to exit.
Sep 12 00:21:20.836231 systemd[1]: Started sshd@48-10.0.0.92:22-10.0.0.1:42328.service - OpenSSH per-connection server daemon (10.0.0.1:42328).
Sep 12 00:21:20.837195 systemd-logind[1570]: Removed session 48.
Sep 12 00:21:20.904558 sshd[6572]: Accepted publickey for core from 10.0.0.1 port 42328 ssh2: RSA SHA256:A0suq1pUL9tS3FfPj5cV4y8UVFHgmV6pScsqSKyxj00
Sep 12 00:21:20.906291 sshd-session[6572]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 12 00:21:20.911106 systemd-logind[1570]: New session 49 of user core.
Sep 12 00:21:20.917890 systemd[1]: Started session-49.scope - Session 49 of User core.
Sep 12 00:21:21.408889 kubelet[2728]: E0912 00:21:21.408846    2728 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 12 00:21:21.455147 sshd[6574]: Connection closed by 10.0.0.1 port 42328
Sep 12 00:21:21.455529 sshd-session[6572]: pam_unix(sshd:session): session closed for user core
Sep 12 00:21:21.466637 systemd[1]: sshd@48-10.0.0.92:22-10.0.0.1:42328.service: Deactivated successfully.
Sep 12 00:21:21.468853 systemd[1]: session-49.scope: Deactivated successfully.
Sep 12 00:21:21.470022 systemd-logind[1570]: Session 49 logged out. Waiting for processes to exit.
Sep 12 00:21:21.474397 systemd[1]: Started sshd@49-10.0.0.92:22-10.0.0.1:42342.service - OpenSSH per-connection server daemon (10.0.0.1:42342).
Sep 12 00:21:21.475629 systemd-logind[1570]: Removed session 49.
Sep 12 00:21:21.529493 sshd[6596]: Accepted publickey for core from 10.0.0.1 port 42342 ssh2: RSA SHA256:A0suq1pUL9tS3FfPj5cV4y8UVFHgmV6pScsqSKyxj00
Sep 12 00:21:21.530962 sshd-session[6596]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 12 00:21:21.535821 systemd-logind[1570]: New session 50 of user core.
Sep 12 00:21:21.545832 systemd[1]: Started session-50.scope - Session 50 of User core.
Sep 12 00:21:21.623174 containerd[1588]: time="2025-09-12T00:21:21.623123881Z" level=info msg="TaskExit event in podsandbox handler container_id:\"f0c2205f8d13bd6fb6764e7a7f9f5a65d6979c6fcd78eb86f05a3ff30bb5fc6d\" id:\"555ff1775147bc0418015e9693d8d0862951cf4fc1077b1a22d9b5e083f933ec\" pid:6611 exited_at:{seconds:1757636481 nanos:622895465}"
Sep 12 00:21:21.911545 sshd[6598]: Connection closed by 10.0.0.1 port 42342
Sep 12 00:21:21.912110 sshd-session[6596]: pam_unix(sshd:session): session closed for user core
Sep 12 00:21:21.924472 systemd[1]: sshd@49-10.0.0.92:22-10.0.0.1:42342.service: Deactivated successfully.
Sep 12 00:21:21.927268 systemd[1]: session-50.scope: Deactivated successfully.
Sep 12 00:21:21.928333 systemd-logind[1570]: Session 50 logged out. Waiting for processes to exit.
Sep 12 00:21:21.936883 systemd[1]: Started sshd@50-10.0.0.92:22-10.0.0.1:42354.service - OpenSSH per-connection server daemon (10.0.0.1:42354).
Sep 12 00:21:21.937798 systemd-logind[1570]: Removed session 50.
Sep 12 00:21:21.989359 sshd[6630]: Accepted publickey for core from 10.0.0.1 port 42354 ssh2: RSA SHA256:A0suq1pUL9tS3FfPj5cV4y8UVFHgmV6pScsqSKyxj00
Sep 12 00:21:21.991083 sshd-session[6630]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 12 00:21:21.995892 systemd-logind[1570]: New session 51 of user core.
Sep 12 00:21:22.012102 systemd[1]: Started session-51.scope - Session 51 of User core.
Sep 12 00:21:22.324619 sshd[6632]: Connection closed by 10.0.0.1 port 42354
Sep 12 00:21:22.324869 sshd-session[6630]: pam_unix(sshd:session): session closed for user core
Sep 12 00:21:22.328112 systemd[1]: sshd@50-10.0.0.92:22-10.0.0.1:42354.service: Deactivated successfully.
Sep 12 00:21:22.330402 systemd[1]: session-51.scope: Deactivated successfully.
Sep 12 00:21:22.331985 systemd-logind[1570]: Session 51 logged out. Waiting for processes to exit.
Sep 12 00:21:22.333899 systemd-logind[1570]: Removed session 51.
Sep 12 00:21:27.337522 systemd[1]: Started sshd@51-10.0.0.92:22-10.0.0.1:42368.service - OpenSSH per-connection server daemon (10.0.0.1:42368).
Sep 12 00:21:27.376954 sshd[6648]: Accepted publickey for core from 10.0.0.1 port 42368 ssh2: RSA SHA256:A0suq1pUL9tS3FfPj5cV4y8UVFHgmV6pScsqSKyxj00
Sep 12 00:21:27.378873 sshd-session[6648]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 12 00:21:27.383176 systemd-logind[1570]: New session 52 of user core.
Sep 12 00:21:27.393856 systemd[1]: Started session-52.scope - Session 52 of User core.
Sep 12 00:21:27.566290 sshd[6650]: Connection closed by 10.0.0.1 port 42368
Sep 12 00:21:27.567044 sshd-session[6648]: pam_unix(sshd:session): session closed for user core
Sep 12 00:21:27.572834 systemd[1]: sshd@51-10.0.0.92:22-10.0.0.1:42368.service: Deactivated successfully.
Sep 12 00:21:27.579215 systemd[1]: session-52.scope: Deactivated successfully.
Sep 12 00:21:27.580496 systemd-logind[1570]: Session 52 logged out. Waiting for processes to exit.
Sep 12 00:21:27.583240 systemd-logind[1570]: Removed session 52.
Sep 12 00:21:27.648685 containerd[1588]: time="2025-09-12T00:21:27.648621887Z" level=info msg="TaskExit event in podsandbox handler container_id:\"8e77c2db2f97c9d515122f6effb58addd8500501585efee73de604a843c53768\" id:\"9b99327a70a71654c32557bc69cd01319dd5758fa287b74a71dbe5d93ed21c80\" pid:6672 exited_at:{seconds:1757636487 nanos:648210502}"
Sep 12 00:21:30.407977 kubelet[2728]: E0912 00:21:30.407937    2728 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 12 00:21:32.580892 systemd[1]: Started sshd@52-10.0.0.92:22-10.0.0.1:36346.service - OpenSSH per-connection server daemon (10.0.0.1:36346).
Sep 12 00:21:32.641352 sshd[6689]: Accepted publickey for core from 10.0.0.1 port 36346 ssh2: RSA SHA256:A0suq1pUL9tS3FfPj5cV4y8UVFHgmV6pScsqSKyxj00
Sep 12 00:21:32.643085 sshd-session[6689]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 12 00:21:32.648250 systemd-logind[1570]: New session 53 of user core.
Sep 12 00:21:32.660911 systemd[1]: Started session-53.scope - Session 53 of User core.
Sep 12 00:21:32.798153 sshd[6691]: Connection closed by 10.0.0.1 port 36346
Sep 12 00:21:32.798497 sshd-session[6689]: pam_unix(sshd:session): session closed for user core
Sep 12 00:21:32.803512 systemd[1]: sshd@52-10.0.0.92:22-10.0.0.1:36346.service: Deactivated successfully.
Sep 12 00:21:32.805643 systemd[1]: session-53.scope: Deactivated successfully.
Sep 12 00:21:32.806642 systemd-logind[1570]: Session 53 logged out. Waiting for processes to exit.
Sep 12 00:21:32.808169 systemd-logind[1570]: Removed session 53.
Sep 12 00:21:35.675931 containerd[1588]: time="2025-09-12T00:21:35.675881124Z" level=info msg="TaskExit event in podsandbox handler container_id:\"f0c2205f8d13bd6fb6764e7a7f9f5a65d6979c6fcd78eb86f05a3ff30bb5fc6d\" id:\"17591879802a914d4fb0dd686d15db79aa5a016d07c4660a978d7c08557dcd1b\" pid:6717 exited_at:{seconds:1757636495 nanos:675623504}"
Sep 12 00:21:37.815055 systemd[1]: Started sshd@53-10.0.0.92:22-10.0.0.1:36354.service - OpenSSH per-connection server daemon (10.0.0.1:36354).
Sep 12 00:21:37.863904 sshd[6728]: Accepted publickey for core from 10.0.0.1 port 36354 ssh2: RSA SHA256:A0suq1pUL9tS3FfPj5cV4y8UVFHgmV6pScsqSKyxj00
Sep 12 00:21:37.866367 sshd-session[6728]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 12 00:21:37.871810 systemd-logind[1570]: New session 54 of user core.
Sep 12 00:21:37.880994 systemd[1]: Started session-54.scope - Session 54 of User core.
Sep 12 00:21:37.995671 sshd[6730]: Connection closed by 10.0.0.1 port 36354
Sep 12 00:21:37.996027 sshd-session[6728]: pam_unix(sshd:session): session closed for user core
Sep 12 00:21:38.001398 systemd[1]: sshd@53-10.0.0.92:22-10.0.0.1:36354.service: Deactivated successfully.
Sep 12 00:21:38.003505 systemd[1]: session-54.scope: Deactivated successfully.
Sep 12 00:21:38.004260 systemd-logind[1570]: Session 54 logged out. Waiting for processes to exit.
Sep 12 00:21:38.005483 systemd-logind[1570]: Removed session 54.
Sep 12 00:21:38.413455 kubelet[2728]: E0912 00:21:38.413417    2728 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 12 00:21:38.488873 containerd[1588]: time="2025-09-12T00:21:38.488824997Z" level=info msg="TaskExit event in podsandbox handler container_id:\"35d56abd8de7058402ca58dd95fa1e3ff27070645c66ab120a440f99c43d3174\" id:\"e12b2a36b3dbd8f4753d8cc358dd73e4e163bc1909da79057e22fa948533974c\" pid:6755 exited_at:{seconds:1757636498 nanos:488418264}"
Sep 12 00:21:40.408430 kubelet[2728]: E0912 00:21:40.408326    2728 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 12 00:21:43.012420 systemd[1]: Started sshd@54-10.0.0.92:22-10.0.0.1:55258.service - OpenSSH per-connection server daemon (10.0.0.1:55258).
Sep 12 00:21:43.085106 sshd[6768]: Accepted publickey for core from 10.0.0.1 port 55258 ssh2: RSA SHA256:A0suq1pUL9tS3FfPj5cV4y8UVFHgmV6pScsqSKyxj00
Sep 12 00:21:43.086623 sshd-session[6768]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 12 00:21:43.090734 systemd-logind[1570]: New session 55 of user core.
Sep 12 00:21:43.101838 systemd[1]: Started session-55.scope - Session 55 of User core.
Sep 12 00:21:43.249975 sshd[6770]: Connection closed by 10.0.0.1 port 55258
Sep 12 00:21:43.250323 sshd-session[6768]: pam_unix(sshd:session): session closed for user core
Sep 12 00:21:43.253988 systemd[1]: sshd@54-10.0.0.92:22-10.0.0.1:55258.service: Deactivated successfully.
Sep 12 00:21:43.255854 systemd[1]: session-55.scope: Deactivated successfully.
Sep 12 00:21:43.256516 systemd-logind[1570]: Session 55 logged out. Waiting for processes to exit.
Sep 12 00:21:43.257700 systemd-logind[1570]: Removed session 55.
Sep 12 00:21:44.408116 kubelet[2728]: E0912 00:21:44.408055    2728 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"