Sep 13 00:23:29.805239 kernel: Linux version 6.12.47-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241221 p7) 14.2.1 20241221, GNU ld (Gentoo 2.44 p1) 2.44.0) #1 SMP PREEMPT_DYNAMIC Fri Sep 12 22:15:39 -00 2025
Sep 13 00:23:29.805265 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=21b29c6e420cf06e0546ff797fc1285d986af130e4ba1abb9f27cb6343b53294
Sep 13 00:23:29.805274 kernel: BIOS-provided physical RAM map:
Sep 13 00:23:29.805281 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Sep 13 00:23:29.805287 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Sep 13 00:23:29.805293 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Sep 13 00:23:29.805301 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000009cfdbfff] usable
Sep 13 00:23:29.805310 kernel: BIOS-e820: [mem 0x000000009cfdc000-0x000000009cffffff] reserved
Sep 13 00:23:29.805320 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved
Sep 13 00:23:29.805326 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved
Sep 13 00:23:29.805333 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Sep 13 00:23:29.805339 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Sep 13 00:23:29.805345 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
Sep 13 00:23:29.805352 kernel: NX (Execute Disable) protection: active
Sep 13 00:23:29.805362 kernel: APIC: Static calls initialized
Sep 13 00:23:29.805370 kernel: SMBIOS 2.8 present.
Sep 13 00:23:29.805380 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.16.2-debian-1.16.2-1 04/01/2014
Sep 13 00:23:29.805387 kernel: DMI: Memory slots populated: 1/1
Sep 13 00:23:29.805394 kernel: Hypervisor detected: KVM
Sep 13 00:23:29.805401 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Sep 13 00:23:29.805408 kernel: kvm-clock: using sched offset of 4378926811 cycles
Sep 13 00:23:29.805415 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Sep 13 00:23:29.805423 kernel: tsc: Detected 2794.748 MHz processor
Sep 13 00:23:29.805433 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Sep 13 00:23:29.805441 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Sep 13 00:23:29.805448 kernel: last_pfn = 0x9cfdc max_arch_pfn = 0x400000000
Sep 13 00:23:29.805455 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Sep 13 00:23:29.805463 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Sep 13 00:23:29.805470 kernel: Using GB pages for direct mapping
Sep 13 00:23:29.805477 kernel: ACPI: Early table checksum verification disabled
Sep 13 00:23:29.805484 kernel: ACPI: RSDP 0x00000000000F59D0 000014 (v00 BOCHS )
Sep 13 00:23:29.805492 kernel: ACPI: RSDT 0x000000009CFE241A 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 13 00:23:29.805501 kernel: ACPI: FACP 0x000000009CFE21FA 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Sep 13 00:23:29.805509 kernel: ACPI: DSDT 0x000000009CFE0040 0021BA (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 13 00:23:29.805516 kernel: ACPI: FACS 0x000000009CFE0000 000040
Sep 13 00:23:29.805523 kernel: ACPI: APIC 0x000000009CFE22EE 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 13 00:23:29.805530 kernel: ACPI: HPET 0x000000009CFE237E 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 13 00:23:29.805537 kernel: ACPI: MCFG 0x000000009CFE23B6 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 13 00:23:29.805545 kernel: ACPI: WAET 0x000000009CFE23F2 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 13 00:23:29.805552 kernel: ACPI: Reserving FACP table memory at [mem 0x9cfe21fa-0x9cfe22ed]
Sep 13 00:23:29.805565 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cfe0040-0x9cfe21f9]
Sep 13 00:23:29.805572 kernel: ACPI: Reserving FACS table memory at [mem 0x9cfe0000-0x9cfe003f]
Sep 13 00:23:29.805580 kernel: ACPI: Reserving APIC table memory at [mem 0x9cfe22ee-0x9cfe237d]
Sep 13 00:23:29.805587 kernel: ACPI: Reserving HPET table memory at [mem 0x9cfe237e-0x9cfe23b5]
Sep 13 00:23:29.805594 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cfe23b6-0x9cfe23f1]
Sep 13 00:23:29.805602 kernel: ACPI: Reserving WAET table memory at [mem 0x9cfe23f2-0x9cfe2419]
Sep 13 00:23:29.805612 kernel: No NUMA configuration found
Sep 13 00:23:29.805619 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cfdbfff]
Sep 13 00:23:29.805627 kernel: NODE_DATA(0) allocated [mem 0x9cfd4dc0-0x9cfdbfff]
Sep 13 00:23:29.805634 kernel: Zone ranges:
Sep 13 00:23:29.805642 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Sep 13 00:23:29.805649 kernel: DMA32 [mem 0x0000000001000000-0x000000009cfdbfff]
Sep 13 00:23:29.805657 kernel: Normal empty
Sep 13 00:23:29.805664 kernel: Device empty
Sep 13 00:23:29.805671 kernel: Movable zone start for each node
Sep 13 00:23:29.805679 kernel: Early memory node ranges
Sep 13 00:23:29.805689 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
Sep 13 00:23:29.805696 kernel: node 0: [mem 0x0000000000100000-0x000000009cfdbfff]
Sep 13 00:23:29.805704 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cfdbfff]
Sep 13 00:23:29.805711 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Sep 13 00:23:29.805719 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Sep 13 00:23:29.805726 kernel: On node 0, zone DMA32: 12324 pages in unavailable ranges
Sep 13 00:23:29.805733 kernel: ACPI: PM-Timer IO Port: 0x608
Sep 13 00:23:29.805743 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Sep 13 00:23:29.805751 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Sep 13 00:23:29.805760 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Sep 13 00:23:29.805768 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Sep 13 00:23:29.805778 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Sep 13 00:23:29.805785 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Sep 13 00:23:29.805793 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Sep 13 00:23:29.805801 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Sep 13 00:23:29.805810 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Sep 13 00:23:29.805819 kernel: TSC deadline timer available
Sep 13 00:23:29.805826 kernel: CPU topo: Max. logical packages: 1
Sep 13 00:23:29.805839 kernel: CPU topo: Max. logical dies: 1
Sep 13 00:23:29.805846 kernel: CPU topo: Max. dies per package: 1
Sep 13 00:23:29.805855 kernel: CPU topo: Max. threads per core: 1
Sep 13 00:23:29.805863 kernel: CPU topo: Num. cores per package: 4
Sep 13 00:23:29.805871 kernel: CPU topo: Num. threads per package: 4
Sep 13 00:23:29.805878 kernel: CPU topo: Allowing 4 present CPUs plus 0 hotplug CPUs
Sep 13 00:23:29.805886 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Sep 13 00:23:29.805893 kernel: kvm-guest: KVM setup pv remote TLB flush
Sep 13 00:23:29.805901 kernel: kvm-guest: setup PV sched yield
Sep 13 00:23:29.805908 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices
Sep 13 00:23:29.805918 kernel: Booting paravirtualized kernel on KVM
Sep 13 00:23:29.805926 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Sep 13 00:23:29.805934 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1
Sep 13 00:23:29.805941 kernel: percpu: Embedded 60 pages/cpu s207832 r8192 d29736 u524288
Sep 13 00:23:29.805949 kernel: pcpu-alloc: s207832 r8192 d29736 u524288 alloc=1*2097152
Sep 13 00:23:29.805956 kernel: pcpu-alloc: [0] 0 1 2 3
Sep 13 00:23:29.805964 kernel: kvm-guest: PV spinlocks enabled
Sep 13 00:23:29.805971 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Sep 13 00:23:29.805980 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=21b29c6e420cf06e0546ff797fc1285d986af130e4ba1abb9f27cb6343b53294
Sep 13 00:23:29.805991 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Sep 13 00:23:29.805998 kernel: random: crng init done
Sep 13 00:23:29.806005 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Sep 13 00:23:29.806028 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Sep 13 00:23:29.806035 kernel: Fallback order for Node 0: 0
Sep 13 00:23:29.806043 kernel: Built 1 zonelists, mobility grouping on. Total pages: 642938
Sep 13 00:23:29.806050 kernel: Policy zone: DMA32
Sep 13 00:23:29.806058 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Sep 13 00:23:29.806068 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Sep 13 00:23:29.806076 kernel: ftrace: allocating 40122 entries in 157 pages
Sep 13 00:23:29.806083 kernel: ftrace: allocated 157 pages with 5 groups
Sep 13 00:23:29.806091 kernel: Dynamic Preempt: voluntary
Sep 13 00:23:29.806106 kernel: rcu: Preemptible hierarchical RCU implementation.
Sep 13 00:23:29.806114 kernel: rcu: RCU event tracing is enabled.
Sep 13 00:23:29.806122 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Sep 13 00:23:29.806129 kernel: Trampoline variant of Tasks RCU enabled.
Sep 13 00:23:29.806140 kernel: Rude variant of Tasks RCU enabled.
Sep 13 00:23:29.806150 kernel: Tracing variant of Tasks RCU enabled.
Sep 13 00:23:29.806158 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Sep 13 00:23:29.806166 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Sep 13 00:23:29.806173 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Sep 13 00:23:29.806181 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Sep 13 00:23:29.806189 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Sep 13 00:23:29.806196 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16
Sep 13 00:23:29.806204 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Sep 13 00:23:29.806222 kernel: Console: colour VGA+ 80x25
Sep 13 00:23:29.806229 kernel: printk: legacy console [ttyS0] enabled
Sep 13 00:23:29.806237 kernel: ACPI: Core revision 20240827
Sep 13 00:23:29.806245 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Sep 13 00:23:29.806256 kernel: APIC: Switch to symmetric I/O mode setup
Sep 13 00:23:29.806263 kernel: x2apic enabled
Sep 13 00:23:29.806274 kernel: APIC: Switched APIC routing to: physical x2apic
Sep 13 00:23:29.806282 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask()
Sep 13 00:23:29.806290 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself()
Sep 13 00:23:29.806300 kernel: kvm-guest: setup PV IPIs
Sep 13 00:23:29.806308 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Sep 13 00:23:29.806316 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x2848df6a9de, max_idle_ns: 440795280912 ns
Sep 13 00:23:29.806324 kernel: Calibrating delay loop (skipped) preset value.. 5589.49 BogoMIPS (lpj=2794748)
Sep 13 00:23:29.806332 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Sep 13 00:23:29.806340 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Sep 13 00:23:29.806348 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Sep 13 00:23:29.806356 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Sep 13 00:23:29.806366 kernel: Spectre V2 : Mitigation: Retpolines
Sep 13 00:23:29.806373 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Sep 13 00:23:29.806381 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls
Sep 13 00:23:29.806389 kernel: active return thunk: retbleed_return_thunk
Sep 13 00:23:29.806397 kernel: RETBleed: Mitigation: untrained return thunk
Sep 13 00:23:29.806405 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Sep 13 00:23:29.806413 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Sep 13 00:23:29.806421 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied!
Sep 13 00:23:29.806429 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options.
Sep 13 00:23:29.806439 kernel: active return thunk: srso_return_thunk
Sep 13 00:23:29.806447 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode
Sep 13 00:23:29.806455 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Sep 13 00:23:29.806463 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Sep 13 00:23:29.806471 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Sep 13 00:23:29.806479 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Sep 13 00:23:29.806487 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format.
Sep 13 00:23:29.806494 kernel: Freeing SMP alternatives memory: 32K
Sep 13 00:23:29.806502 kernel: pid_max: default: 32768 minimum: 301
Sep 13 00:23:29.806512 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima
Sep 13 00:23:29.806520 kernel: landlock: Up and running.
Sep 13 00:23:29.806528 kernel: SELinux: Initializing.
Sep 13 00:23:29.806538 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Sep 13 00:23:29.806546 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Sep 13 00:23:29.806554 kernel: smpboot: CPU0: AMD EPYC 7402P 24-Core Processor (family: 0x17, model: 0x31, stepping: 0x0)
Sep 13 00:23:29.806562 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
Sep 13 00:23:29.806570 kernel: ... version: 0
Sep 13 00:23:29.806578 kernel: ... bit width: 48
Sep 13 00:23:29.806588 kernel: ... generic registers: 6
Sep 13 00:23:29.806596 kernel: ... value mask: 0000ffffffffffff
Sep 13 00:23:29.806604 kernel: ... max period: 00007fffffffffff
Sep 13 00:23:29.806611 kernel: ... fixed-purpose events: 0
Sep 13 00:23:29.806619 kernel: ... event mask: 000000000000003f
Sep 13 00:23:29.806627 kernel: signal: max sigframe size: 1776
Sep 13 00:23:29.806635 kernel: rcu: Hierarchical SRCU implementation.
Sep 13 00:23:29.806643 kernel: rcu: Max phase no-delay instances is 400.
Sep 13 00:23:29.806651 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level
Sep 13 00:23:29.806662 kernel: smp: Bringing up secondary CPUs ...
Sep 13 00:23:29.806669 kernel: smpboot: x86: Booting SMP configuration:
Sep 13 00:23:29.806677 kernel: .... node #0, CPUs: #1 #2 #3
Sep 13 00:23:29.806685 kernel: smp: Brought up 1 node, 4 CPUs
Sep 13 00:23:29.806693 kernel: smpboot: Total of 4 processors activated (22357.98 BogoMIPS)
Sep 13 00:23:29.806701 kernel: Memory: 2430964K/2571752K available (14336K kernel code, 2432K rwdata, 9960K rodata, 53828K init, 1088K bss, 134860K reserved, 0K cma-reserved)
Sep 13 00:23:29.806709 kernel: devtmpfs: initialized
Sep 13 00:23:29.806717 kernel: x86/mm: Memory block size: 128MB
Sep 13 00:23:29.806725 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Sep 13 00:23:29.806736 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Sep 13 00:23:29.806746 kernel: pinctrl core: initialized pinctrl subsystem
Sep 13 00:23:29.806754 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Sep 13 00:23:29.806764 kernel: audit: initializing netlink subsys (disabled)
Sep 13 00:23:29.806772 kernel: audit: type=2000 audit(1757723006.923:1): state=initialized audit_enabled=0 res=1
Sep 13 00:23:29.806780 kernel: thermal_sys: Registered thermal governor 'step_wise'
Sep 13 00:23:29.806788 kernel: thermal_sys: Registered thermal governor 'user_space'
Sep 13 00:23:29.806796 kernel: cpuidle: using governor menu
Sep 13 00:23:29.806804 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Sep 13 00:23:29.806814 kernel: dca service started, version 1.12.1
Sep 13 00:23:29.806822 kernel: PCI: ECAM [mem 0xb0000000-0xbfffffff] (base 0xb0000000) for domain 0000 [bus 00-ff]
Sep 13 00:23:29.806829 kernel: PCI: ECAM [mem 0xb0000000-0xbfffffff] reserved as E820 entry
Sep 13 00:23:29.806837 kernel: PCI: Using configuration type 1 for base access
Sep 13 00:23:29.806845 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Sep 13 00:23:29.806853 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Sep 13 00:23:29.806861 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Sep 13 00:23:29.806869 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Sep 13 00:23:29.806877 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Sep 13 00:23:29.806886 kernel: ACPI: Added _OSI(Module Device)
Sep 13 00:23:29.806894 kernel: ACPI: Added _OSI(Processor Device)
Sep 13 00:23:29.806902 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Sep 13 00:23:29.806910 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Sep 13 00:23:29.806918 kernel: ACPI: Interpreter enabled
Sep 13 00:23:29.806925 kernel: ACPI: PM: (supports S0 S3 S5)
Sep 13 00:23:29.806933 kernel: ACPI: Using IOAPIC for interrupt routing
Sep 13 00:23:29.806941 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Sep 13 00:23:29.806949 kernel: PCI: Using E820 reservations for host bridge windows
Sep 13 00:23:29.806959 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Sep 13 00:23:29.806967 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Sep 13 00:23:29.807283 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Sep 13 00:23:29.807469 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR]
Sep 13 00:23:29.807594 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability]
Sep 13 00:23:29.807604 kernel: PCI host bridge to bus 0000:00
Sep 13 00:23:29.807741 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Sep 13 00:23:29.807869 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Sep 13 00:23:29.807986 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Sep 13 00:23:29.808134 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xafffffff window]
Sep 13 00:23:29.808248 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Sep 13 00:23:29.808395 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x8ffffffff window]
Sep 13 00:23:29.808747 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Sep 13 00:23:29.808975 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000 conventional PCI endpoint
Sep 13 00:23:29.809177 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000 conventional PCI endpoint
Sep 13 00:23:29.809308 kernel: pci 0000:00:01.0: BAR 0 [mem 0xfd000000-0xfdffffff pref]
Sep 13 00:23:29.809449 kernel: pci 0000:00:01.0: BAR 2 [mem 0xfebd0000-0xfebd0fff]
Sep 13 00:23:29.809596 kernel: pci 0000:00:01.0: ROM [mem 0xfebc0000-0xfebcffff pref]
Sep 13 00:23:29.809717 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Sep 13 00:23:29.809897 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00 conventional PCI endpoint
Sep 13 00:23:29.810061 kernel: pci 0000:00:02.0: BAR 0 [io 0xc0c0-0xc0df]
Sep 13 00:23:29.810196 kernel: pci 0000:00:02.0: BAR 1 [mem 0xfebd1000-0xfebd1fff]
Sep 13 00:23:29.810402 kernel: pci 0000:00:02.0: BAR 4 [mem 0xfe000000-0xfe003fff 64bit pref]
Sep 13 00:23:29.810567 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000 conventional PCI endpoint
Sep 13 00:23:29.810692 kernel: pci 0000:00:03.0: BAR 0 [io 0xc000-0xc07f]
Sep 13 00:23:29.810815 kernel: pci 0000:00:03.0: BAR 1 [mem 0xfebd2000-0xfebd2fff]
Sep 13 00:23:29.810943 kernel: pci 0000:00:03.0: BAR 4 [mem 0xfe004000-0xfe007fff 64bit pref]
Sep 13 00:23:29.811122 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint
Sep 13 00:23:29.811287 kernel: pci 0000:00:04.0: BAR 0 [io 0xc0e0-0xc0ff]
Sep 13 00:23:29.811417 kernel: pci 0000:00:04.0: BAR 1 [mem 0xfebd3000-0xfebd3fff]
Sep 13 00:23:29.811556 kernel: pci 0000:00:04.0: BAR 4 [mem 0xfe008000-0xfe00bfff 64bit pref]
Sep 13 00:23:29.811681 kernel: pci 0000:00:04.0: ROM [mem 0xfeb80000-0xfebbffff pref]
Sep 13 00:23:29.811829 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100 conventional PCI endpoint
Sep 13 00:23:29.811953 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Sep 13 00:23:29.812120 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601 conventional PCI endpoint
Sep 13 00:23:29.812246 kernel: pci 0000:00:1f.2: BAR 4 [io 0xc100-0xc11f]
Sep 13 00:23:29.812367 kernel: pci 0000:00:1f.2: BAR 5 [mem 0xfebd4000-0xfebd4fff]
Sep 13 00:23:29.812515 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500 conventional PCI endpoint
Sep 13 00:23:29.812640 kernel: pci 0000:00:1f.3: BAR 4 [io 0x0700-0x073f]
Sep 13 00:23:29.812651 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Sep 13 00:23:29.812663 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Sep 13 00:23:29.812671 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Sep 13 00:23:29.812679 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Sep 13 00:23:29.812687 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Sep 13 00:23:29.812695 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Sep 13 00:23:29.812702 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Sep 13 00:23:29.812710 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Sep 13 00:23:29.812718 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Sep 13 00:23:29.812726 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Sep 13 00:23:29.812735 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Sep 13 00:23:29.812743 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Sep 13 00:23:29.812751 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Sep 13 00:23:29.812759 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Sep 13 00:23:29.812767 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Sep 13 00:23:29.812774 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Sep 13 00:23:29.812782 kernel: iommu: Default domain type: Translated
Sep 13 00:23:29.812790 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Sep 13 00:23:29.812798 kernel: PCI: Using ACPI for IRQ routing
Sep 13 00:23:29.812807 kernel: PCI: pci_cache_line_size set to 64 bytes
Sep 13 00:23:29.812817 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Sep 13 00:23:29.812825 kernel: e820: reserve RAM buffer [mem 0x9cfdc000-0x9fffffff]
Sep 13 00:23:29.812970 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Sep 13 00:23:29.813149 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Sep 13 00:23:29.813274 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Sep 13 00:23:29.813285 kernel: vgaarb: loaded
Sep 13 00:23:29.813293 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Sep 13 00:23:29.813306 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Sep 13 00:23:29.813314 kernel: clocksource: Switched to clocksource kvm-clock
Sep 13 00:23:29.813322 kernel: VFS: Disk quotas dquot_6.6.0
Sep 13 00:23:29.813330 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Sep 13 00:23:29.813338 kernel: pnp: PnP ACPI init
Sep 13 00:23:29.813487 kernel: system 00:05: [mem 0xb0000000-0xbfffffff window] has been reserved
Sep 13 00:23:29.813499 kernel: pnp: PnP ACPI: found 6 devices
Sep 13 00:23:29.813507 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Sep 13 00:23:29.813515 kernel: NET: Registered PF_INET protocol family
Sep 13 00:23:29.813527 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Sep 13 00:23:29.813535 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Sep 13 00:23:29.813543 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Sep 13 00:23:29.813551 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Sep 13 00:23:29.813558 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Sep 13 00:23:29.813566 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Sep 13 00:23:29.813574 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Sep 13 00:23:29.813582 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Sep 13 00:23:29.813592 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Sep 13 00:23:29.813600 kernel: NET: Registered PF_XDP protocol family
Sep 13 00:23:29.813714 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Sep 13 00:23:29.813828 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Sep 13 00:23:29.813938 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Sep 13 00:23:29.814093 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xafffffff window]
Sep 13 00:23:29.814219 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window]
Sep 13 00:23:29.814330 kernel: pci_bus 0000:00: resource 9 [mem 0x100000000-0x8ffffffff window]
Sep 13 00:23:29.814340 kernel: PCI: CLS 0 bytes, default 64
Sep 13 00:23:29.814353 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x2848df6a9de, max_idle_ns: 440795280912 ns
Sep 13 00:23:29.814361 kernel: Initialise system trusted keyrings
Sep 13 00:23:29.814369 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Sep 13 00:23:29.814377 kernel: Key type asymmetric registered
Sep 13 00:23:29.814385 kernel: Asymmetric key parser 'x509' registered
Sep 13 00:23:29.814393 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Sep 13 00:23:29.814401 kernel: io scheduler mq-deadline registered
Sep 13 00:23:29.814409 kernel: io scheduler kyber registered
Sep 13 00:23:29.814416 kernel: io scheduler bfq registered
Sep 13 00:23:29.814426 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Sep 13 00:23:29.814435 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
Sep 13 00:23:29.814443 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23
Sep 13 00:23:29.814450 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20
Sep 13 00:23:29.814458 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Sep 13 00:23:29.814466 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Sep 13 00:23:29.814474 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Sep 13 00:23:29.814482 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Sep 13 00:23:29.814490 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Sep 13 00:23:29.814631 kernel: rtc_cmos 00:04: RTC can wake from S4
Sep 13 00:23:29.814643 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Sep 13 00:23:29.814757 kernel: rtc_cmos 00:04: registered as rtc0
Sep 13 00:23:29.814881 kernel: rtc_cmos 00:04: setting system clock to 2025-09-13T00:23:29 UTC (1757723009)
Sep 13 00:23:29.814995 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram, hpet irqs
Sep 13 00:23:29.815006 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
Sep 13 00:23:29.815044 kernel: NET: Registered PF_INET6 protocol family
Sep 13 00:23:29.815052 kernel: Segment Routing with IPv6
Sep 13 00:23:29.815064 kernel: In-situ OAM (IOAM) with IPv6
Sep 13 00:23:29.815072 kernel: NET: Registered PF_PACKET protocol family
Sep 13 00:23:29.815080 kernel: Key type dns_resolver registered
Sep 13 00:23:29.815088 kernel: IPI shorthand broadcast: enabled
Sep 13 00:23:29.815104 kernel: sched_clock: Marking stable (2984002533, 109768156)->(3115765723, -21995034)
Sep 13 00:23:29.815112 kernel: registered taskstats version 1
Sep 13 00:23:29.815120 kernel: Loading compiled-in X.509 certificates
Sep 13 00:23:29.815128 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.47-flatcar: dd6b45f5ed9ac8d42d60bdb17f83ef06c8bcd8f6'
Sep 13 00:23:29.815135 kernel: Demotion targets for Node 0: null
Sep 13 00:23:29.815145 kernel: Key type .fscrypt registered
Sep 13 00:23:29.815153 kernel: Key type fscrypt-provisioning registered
Sep 13 00:23:29.815162 kernel: ima: No TPM chip found, activating TPM-bypass!
Sep 13 00:23:29.815170 kernel: ima: Allocated hash algorithm: sha1
Sep 13 00:23:29.815177 kernel: ima: No architecture policies found
Sep 13 00:23:29.815185 kernel: clk: Disabling unused clocks
Sep 13 00:23:29.815193 kernel: Warning: unable to open an initial console.
Sep 13 00:23:29.815201 kernel: Freeing unused kernel image (initmem) memory: 53828K
Sep 13 00:23:29.815211 kernel: Write protecting the kernel read-only data: 24576k
Sep 13 00:23:29.815219 kernel: Freeing unused kernel image (rodata/data gap) memory: 280K
Sep 13 00:23:29.815227 kernel: Run /init as init process
Sep 13 00:23:29.815235 kernel: with arguments:
Sep 13 00:23:29.815242 kernel: /init
Sep 13 00:23:29.815250 kernel: with environment:
Sep 13 00:23:29.815258 kernel: HOME=/
Sep 13 00:23:29.815265 kernel: TERM=linux
Sep 13 00:23:29.815273 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Sep 13 00:23:29.815282 systemd[1]: Successfully made /usr/ read-only.
Sep 13 00:23:29.815303 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Sep 13 00:23:29.815315 systemd[1]: Detected virtualization kvm.
Sep 13 00:23:29.815324 systemd[1]: Detected architecture x86-64.
Sep 13 00:23:29.815332 systemd[1]: Running in initrd.
Sep 13 00:23:29.815341 systemd[1]: No hostname configured, using default hostname.
Sep 13 00:23:29.815352 systemd[1]: Hostname set to .
Sep 13 00:23:29.815362 systemd[1]: Initializing machine ID from VM UUID.
Sep 13 00:23:29.815371 systemd[1]: Queued start job for default target initrd.target.
Sep 13 00:23:29.815379 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Sep 13 00:23:29.815388 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Sep 13 00:23:29.815397 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Sep 13 00:23:29.815406 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Sep 13 00:23:29.815415 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Sep 13 00:23:29.815427 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Sep 13 00:23:29.815437 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Sep 13 00:23:29.815445 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Sep 13 00:23:29.815454 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Sep 13 00:23:29.815463 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Sep 13 00:23:29.815471 systemd[1]: Reached target paths.target - Path Units.
Sep 13 00:23:29.815492 systemd[1]: Reached target slices.target - Slice Units.
Sep 13 00:23:29.815500 systemd[1]: Reached target swap.target - Swaps.
Sep 13 00:23:29.815509 systemd[1]: Reached target timers.target - Timer Units.
Sep 13 00:23:29.815518 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Sep 13 00:23:29.815527 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Sep 13 00:23:29.815535 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Sep 13 00:23:29.815544 systemd[1]: Listening on systemd-journald.socket - Journal Sockets.
Sep 13 00:23:29.815553 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Sep 13 00:23:29.815561 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Sep 13 00:23:29.815572 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Sep 13 00:23:29.815581 systemd[1]: Reached target sockets.target - Socket Units. Sep 13 00:23:29.815590 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Sep 13 00:23:29.815599 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Sep 13 00:23:29.815609 systemd[1]: Finished network-cleanup.service - Network Cleanup. Sep 13 00:23:29.815621 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply). Sep 13 00:23:29.815630 systemd[1]: Starting systemd-fsck-usr.service... Sep 13 00:23:29.815638 systemd[1]: Starting systemd-journald.service - Journal Service... Sep 13 00:23:29.815649 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Sep 13 00:23:29.815658 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Sep 13 00:23:29.815666 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Sep 13 00:23:29.815678 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Sep 13 00:23:29.815686 systemd[1]: Finished systemd-fsck-usr.service. Sep 13 00:23:29.815715 systemd-journald[220]: Collecting audit messages is disabled. Sep 13 00:23:29.815739 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Sep 13 00:23:29.815748 systemd-journald[220]: Journal started Sep 13 00:23:29.815767 systemd-journald[220]: Runtime Journal (/run/log/journal/1b7b504b3fe44c928abc38fb39676410) is 6M, max 48.6M, 42.5M free. 
Sep 13 00:23:29.809076 systemd-modules-load[222]: Inserted module 'overlay' Sep 13 00:23:29.850928 systemd[1]: Started systemd-journald.service - Journal Service. Sep 13 00:23:29.850965 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Sep 13 00:23:29.850982 kernel: Bridge firewalling registered Sep 13 00:23:29.836636 systemd-modules-load[222]: Inserted module 'br_netfilter' Sep 13 00:23:29.852327 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Sep 13 00:23:29.855195 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Sep 13 00:23:29.857870 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Sep 13 00:23:29.864913 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Sep 13 00:23:29.868240 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Sep 13 00:23:29.876732 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Sep 13 00:23:29.877659 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Sep 13 00:23:29.888273 systemd-tmpfiles[243]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring. Sep 13 00:23:29.891336 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Sep 13 00:23:29.892672 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Sep 13 00:23:29.895079 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Sep 13 00:23:29.896947 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Sep 13 00:23:29.900778 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... 
Sep 13 00:23:29.903357 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Sep 13 00:23:29.929364 dracut-cmdline[259]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=21b29c6e420cf06e0546ff797fc1285d986af130e4ba1abb9f27cb6343b53294 Sep 13 00:23:29.952848 systemd-resolved[260]: Positive Trust Anchors: Sep 13 00:23:29.952864 systemd-resolved[260]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Sep 13 00:23:29.952896 systemd-resolved[260]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Sep 13 00:23:29.955492 systemd-resolved[260]: Defaulting to hostname 'linux'. Sep 13 00:23:29.961834 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Sep 13 00:23:29.963245 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Sep 13 00:23:30.037064 kernel: SCSI subsystem initialized Sep 13 00:23:30.046063 kernel: Loading iSCSI transport class v2.0-870. 
Sep 13 00:23:30.058078 kernel: iscsi: registered transport (tcp) Sep 13 00:23:30.079068 kernel: iscsi: registered transport (qla4xxx) Sep 13 00:23:30.079165 kernel: QLogic iSCSI HBA Driver Sep 13 00:23:30.101235 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Sep 13 00:23:30.127297 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Sep 13 00:23:30.129994 systemd[1]: Reached target network-pre.target - Preparation for Network. Sep 13 00:23:30.193947 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Sep 13 00:23:30.201840 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Sep 13 00:23:30.257076 kernel: raid6: avx2x4 gen() 27054 MB/s Sep 13 00:23:30.274059 kernel: raid6: avx2x2 gen() 28144 MB/s Sep 13 00:23:30.291112 kernel: raid6: avx2x1 gen() 24490 MB/s Sep 13 00:23:30.291160 kernel: raid6: using algorithm avx2x2 gen() 28144 MB/s Sep 13 00:23:30.309092 kernel: raid6: .... xor() 19730 MB/s, rmw enabled Sep 13 00:23:30.309149 kernel: raid6: using avx2x2 recovery algorithm Sep 13 00:23:30.330048 kernel: xor: automatically using best checksumming function avx Sep 13 00:23:30.515064 kernel: Btrfs loaded, zoned=no, fsverity=no Sep 13 00:23:30.525634 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Sep 13 00:23:30.528641 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Sep 13 00:23:30.569033 systemd-udevd[472]: Using default interface naming scheme 'v255'. Sep 13 00:23:30.577521 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Sep 13 00:23:30.582212 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Sep 13 00:23:30.621527 dracut-pre-trigger[479]: rd.md=0: removing MD RAID activation Sep 13 00:23:30.653878 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. 
Sep 13 00:23:30.656161 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Sep 13 00:23:30.810559 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Sep 13 00:23:30.814533 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Sep 13 00:23:30.850052 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues Sep 13 00:23:30.856221 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB) Sep 13 00:23:30.859236 kernel: cryptd: max_cpu_qlen set to 1000 Sep 13 00:23:30.876046 kernel: AES CTR mode by8 optimization enabled Sep 13 00:23:30.879050 kernel: libata version 3.00 loaded. Sep 13 00:23:30.902060 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input2 Sep 13 00:23:30.902155 kernel: ahci 0000:00:1f.2: version 3.0 Sep 13 00:23:30.903885 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Sep 13 00:23:30.906954 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16 Sep 13 00:23:30.904089 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Sep 13 00:23:30.924106 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Sep 13 00:23:30.924139 kernel: GPT:9289727 != 19775487 Sep 13 00:23:30.924152 kernel: GPT:Alternate GPT header not at the end of the disk. Sep 13 00:23:30.924165 kernel: GPT:9289727 != 19775487 Sep 13 00:23:30.924177 kernel: GPT: Use GNU Parted to correct GPT errors. Sep 13 00:23:30.924190 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Sep 13 00:23:30.924204 kernel: ahci 0000:00:1f.2: AHCI vers 0001.0000, 32 command slots, 1.5 Gbps, SATA mode Sep 13 00:23:30.924536 kernel: ahci 0000:00:1f.2: 6/6 ports implemented (port mask 0x3f) Sep 13 00:23:30.924752 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only Sep 13 00:23:30.906962 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... 
Sep 13 00:23:30.909213 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Sep 13 00:23:30.928633 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. Sep 13 00:23:30.931183 kernel: scsi host0: ahci Sep 13 00:23:30.934049 kernel: scsi host1: ahci Sep 13 00:23:30.934404 kernel: scsi host2: ahci Sep 13 00:23:30.938043 kernel: scsi host3: ahci Sep 13 00:23:30.941100 kernel: scsi host4: ahci Sep 13 00:23:30.943120 kernel: scsi host5: ahci Sep 13 00:23:30.948628 kernel: ata1: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4100 irq 34 lpm-pol 1 Sep 13 00:23:30.948659 kernel: ata2: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4180 irq 34 lpm-pol 1 Sep 13 00:23:30.948675 kernel: ata3: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4200 irq 34 lpm-pol 1 Sep 13 00:23:30.950047 kernel: ata4: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4280 irq 34 lpm-pol 1 Sep 13 00:23:30.951923 kernel: ata5: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4300 irq 34 lpm-pol 1 Sep 13 00:23:30.951951 kernel: ata6: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4380 irq 34 lpm-pol 1 Sep 13 00:23:30.965222 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. Sep 13 00:23:31.001791 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Sep 13 00:23:31.017958 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. Sep 13 00:23:31.026044 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. Sep 13 00:23:31.026147 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132. Sep 13 00:23:31.037113 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. 
Sep 13 00:23:31.039784 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Sep 13 00:23:31.069877 disk-uuid[634]: Primary Header is updated. Sep 13 00:23:31.069877 disk-uuid[634]: Secondary Entries is updated. Sep 13 00:23:31.069877 disk-uuid[634]: Secondary Header is updated. Sep 13 00:23:31.073221 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Sep 13 00:23:31.078036 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Sep 13 00:23:31.269077 kernel: ata5: SATA link down (SStatus 0 SControl 300) Sep 13 00:23:31.269154 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300) Sep 13 00:23:31.269166 kernel: ata4: SATA link down (SStatus 0 SControl 300) Sep 13 00:23:31.270050 kernel: ata6: SATA link down (SStatus 0 SControl 300) Sep 13 00:23:31.270076 kernel: ata1: SATA link down (SStatus 0 SControl 300) Sep 13 00:23:31.271044 kernel: ata3.00: LPM support broken, forcing max_power Sep 13 00:23:31.272329 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100 Sep 13 00:23:31.272379 kernel: ata3.00: applying bridge limits Sep 13 00:23:31.273038 kernel: ata2: SATA link down (SStatus 0 SControl 300) Sep 13 00:23:31.274058 kernel: ata3.00: LPM support broken, forcing max_power Sep 13 00:23:31.274075 kernel: ata3.00: configured for UDMA/100 Sep 13 00:23:31.275108 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5 Sep 13 00:23:31.335060 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray Sep 13 00:23:31.335354 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Sep 13 00:23:31.352038 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0 Sep 13 00:23:31.708271 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Sep 13 00:23:31.710937 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Sep 13 00:23:31.713245 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. 
Sep 13 00:23:31.713320 systemd[1]: Reached target remote-fs.target - Remote File Systems. Sep 13 00:23:31.716440 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Sep 13 00:23:31.741079 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Sep 13 00:23:32.079062 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Sep 13 00:23:32.079159 disk-uuid[635]: The operation has completed successfully. Sep 13 00:23:32.108924 systemd[1]: disk-uuid.service: Deactivated successfully. Sep 13 00:23:32.109073 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Sep 13 00:23:32.144863 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Sep 13 00:23:32.170841 sh[663]: Success Sep 13 00:23:32.189093 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Sep 13 00:23:32.189142 kernel: device-mapper: uevent: version 1.0.3 Sep 13 00:23:32.190169 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev Sep 13 00:23:32.200055 kernel: device-mapper: verity: sha256 using shash "sha256-ni" Sep 13 00:23:32.233906 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Sep 13 00:23:32.236573 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Sep 13 00:23:32.264105 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. 
Sep 13 00:23:32.272584 kernel: BTRFS: device fsid ca815b72-c68a-4b5e-8622-cfb6842bab47 devid 1 transid 38 /dev/mapper/usr (253:0) scanned by mount (675) Sep 13 00:23:32.272614 kernel: BTRFS info (device dm-0): first mount of filesystem ca815b72-c68a-4b5e-8622-cfb6842bab47 Sep 13 00:23:32.272625 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Sep 13 00:23:32.278182 kernel: BTRFS info (device dm-0): disabling log replay at mount time Sep 13 00:23:32.278258 kernel: BTRFS info (device dm-0): enabling free space tree Sep 13 00:23:32.279562 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Sep 13 00:23:32.281133 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System. Sep 13 00:23:32.281659 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Sep 13 00:23:32.282726 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Sep 13 00:23:32.285599 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Sep 13 00:23:32.315342 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (708) Sep 13 00:23:32.315394 kernel: BTRFS info (device vda6): first mount of filesystem 9cd66393-e258-466a-9c7b-a40c48e4924e Sep 13 00:23:32.315410 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Sep 13 00:23:32.319069 kernel: BTRFS info (device vda6): turning on async discard Sep 13 00:23:32.319094 kernel: BTRFS info (device vda6): enabling free space tree Sep 13 00:23:32.324054 kernel: BTRFS info (device vda6): last unmount of filesystem 9cd66393-e258-466a-9c7b-a40c48e4924e Sep 13 00:23:32.324518 systemd[1]: Finished ignition-setup.service - Ignition (setup). Sep 13 00:23:32.327194 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... 
Sep 13 00:23:32.596303 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Sep 13 00:23:32.602273 systemd[1]: Starting systemd-networkd.service - Network Configuration... Sep 13 00:23:32.611855 ignition[752]: Ignition 2.21.0 Sep 13 00:23:32.611868 ignition[752]: Stage: fetch-offline Sep 13 00:23:32.611922 ignition[752]: no configs at "/usr/lib/ignition/base.d" Sep 13 00:23:32.611933 ignition[752]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Sep 13 00:23:32.612053 ignition[752]: parsed url from cmdline: "" Sep 13 00:23:32.612057 ignition[752]: no config URL provided Sep 13 00:23:32.612063 ignition[752]: reading system config file "/usr/lib/ignition/user.ign" Sep 13 00:23:32.612073 ignition[752]: no config at "/usr/lib/ignition/user.ign" Sep 13 00:23:32.612101 ignition[752]: op(1): [started] loading QEMU firmware config module Sep 13 00:23:32.612106 ignition[752]: op(1): executing: "modprobe" "qemu_fw_cfg" Sep 13 00:23:32.620330 ignition[752]: op(1): [finished] loading QEMU firmware config module Sep 13 00:23:32.646824 systemd-networkd[851]: lo: Link UP Sep 13 00:23:32.646835 systemd-networkd[851]: lo: Gained carrier Sep 13 00:23:32.648449 systemd-networkd[851]: Enumeration completed Sep 13 00:23:32.648605 systemd[1]: Started systemd-networkd.service - Network Configuration. Sep 13 00:23:32.648852 systemd-networkd[851]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 13 00:23:32.648856 systemd-networkd[851]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Sep 13 00:23:32.649365 systemd[1]: Reached target network.target - Network. 
Sep 13 00:23:32.650296 systemd-networkd[851]: eth0: Link UP Sep 13 00:23:32.650445 systemd-networkd[851]: eth0: Gained carrier Sep 13 00:23:32.650460 systemd-networkd[851]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 13 00:23:32.668061 systemd-networkd[851]: eth0: DHCPv4 address 10.0.0.78/16, gateway 10.0.0.1 acquired from 10.0.0.1 Sep 13 00:23:32.673086 ignition[752]: parsing config with SHA512: 42de7b39ad53fbd3b18fb81edb8564a53c73a54edf7ea407df9ae5b63d31a28cd3afc8d49456910f6c9d66f17c6fac55d773ec00df74dfcffa83476ec36974fa Sep 13 00:23:32.676638 unknown[752]: fetched base config from "system" Sep 13 00:23:32.676648 unknown[752]: fetched user config from "qemu" Sep 13 00:23:32.676991 ignition[752]: fetch-offline: fetch-offline passed Sep 13 00:23:32.677078 ignition[752]: Ignition finished successfully Sep 13 00:23:32.680212 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Sep 13 00:23:32.682454 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Sep 13 00:23:32.683466 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Sep 13 00:23:32.818689 ignition[858]: Ignition 2.21.0 Sep 13 00:23:32.818704 ignition[858]: Stage: kargs Sep 13 00:23:32.818873 ignition[858]: no configs at "/usr/lib/ignition/base.d" Sep 13 00:23:32.818885 ignition[858]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Sep 13 00:23:32.819764 ignition[858]: kargs: kargs passed Sep 13 00:23:32.819817 ignition[858]: Ignition finished successfully Sep 13 00:23:32.824248 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Sep 13 00:23:32.827228 systemd[1]: Starting ignition-disks.service - Ignition (disks)... 
Sep 13 00:23:32.870261 ignition[866]: Ignition 2.21.0 Sep 13 00:23:32.870280 ignition[866]: Stage: disks Sep 13 00:23:32.870458 ignition[866]: no configs at "/usr/lib/ignition/base.d" Sep 13 00:23:32.870475 ignition[866]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Sep 13 00:23:32.871959 ignition[866]: disks: disks passed Sep 13 00:23:32.872051 ignition[866]: Ignition finished successfully Sep 13 00:23:32.875298 systemd[1]: Finished ignition-disks.service - Ignition (disks). Sep 13 00:23:32.876699 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Sep 13 00:23:32.878650 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Sep 13 00:23:32.881053 systemd[1]: Reached target local-fs.target - Local File Systems. Sep 13 00:23:32.882110 systemd[1]: Reached target sysinit.target - System Initialization. Sep 13 00:23:32.884295 systemd[1]: Reached target basic.target - Basic System. Sep 13 00:23:32.886251 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Sep 13 00:23:32.919756 systemd-fsck[876]: ROOT: clean, 15/553520 files, 52789/553472 blocks Sep 13 00:23:32.927806 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Sep 13 00:23:32.930390 systemd[1]: Mounting sysroot.mount - /sysroot... Sep 13 00:23:33.037059 kernel: EXT4-fs (vda9): mounted filesystem 7f859ed0-e8c8-40c1-91d3-e1e964d8c4e8 r/w with ordered data mode. Quota mode: none. Sep 13 00:23:33.037787 systemd[1]: Mounted sysroot.mount - /sysroot. Sep 13 00:23:33.038502 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Sep 13 00:23:33.042487 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Sep 13 00:23:33.043562 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Sep 13 00:23:33.045473 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. 
Sep 13 00:23:33.045516 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Sep 13 00:23:33.045541 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Sep 13 00:23:33.065404 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Sep 13 00:23:33.066943 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Sep 13 00:23:33.071041 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (884) Sep 13 00:23:33.073194 kernel: BTRFS info (device vda6): first mount of filesystem 9cd66393-e258-466a-9c7b-a40c48e4924e Sep 13 00:23:33.073219 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Sep 13 00:23:33.076422 kernel: BTRFS info (device vda6): turning on async discard Sep 13 00:23:33.076476 kernel: BTRFS info (device vda6): enabling free space tree Sep 13 00:23:33.079028 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Sep 13 00:23:33.106945 initrd-setup-root[908]: cut: /sysroot/etc/passwd: No such file or directory Sep 13 00:23:33.112184 initrd-setup-root[915]: cut: /sysroot/etc/group: No such file or directory Sep 13 00:23:33.117360 initrd-setup-root[922]: cut: /sysroot/etc/shadow: No such file or directory Sep 13 00:23:33.121906 initrd-setup-root[929]: cut: /sysroot/etc/gshadow: No such file or directory Sep 13 00:23:33.215258 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Sep 13 00:23:33.218440 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Sep 13 00:23:33.221049 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Sep 13 00:23:33.250044 kernel: BTRFS info (device vda6): last unmount of filesystem 9cd66393-e258-466a-9c7b-a40c48e4924e Sep 13 00:23:33.263167 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Sep 13 00:23:33.271574 systemd[1]: sysroot-oem.mount: Deactivated successfully. 
Sep 13 00:23:33.279775 ignition[998]: INFO : Ignition 2.21.0 Sep 13 00:23:33.279775 ignition[998]: INFO : Stage: mount Sep 13 00:23:33.282138 ignition[998]: INFO : no configs at "/usr/lib/ignition/base.d" Sep 13 00:23:33.282138 ignition[998]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Sep 13 00:23:33.282138 ignition[998]: INFO : mount: mount passed Sep 13 00:23:33.282138 ignition[998]: INFO : Ignition finished successfully Sep 13 00:23:33.285220 systemd[1]: Finished ignition-mount.service - Ignition (mount). Sep 13 00:23:33.288269 systemd[1]: Starting ignition-files.service - Ignition (files)... Sep 13 00:23:33.323727 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Sep 13 00:23:33.351752 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (1010) Sep 13 00:23:33.351790 kernel: BTRFS info (device vda6): first mount of filesystem 9cd66393-e258-466a-9c7b-a40c48e4924e Sep 13 00:23:33.351802 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Sep 13 00:23:33.355553 kernel: BTRFS info (device vda6): turning on async discard Sep 13 00:23:33.355619 kernel: BTRFS info (device vda6): enabling free space tree Sep 13 00:23:33.357443 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Sep 13 00:23:33.392517 ignition[1027]: INFO : Ignition 2.21.0 Sep 13 00:23:33.392517 ignition[1027]: INFO : Stage: files Sep 13 00:23:33.394442 ignition[1027]: INFO : no configs at "/usr/lib/ignition/base.d" Sep 13 00:23:33.394442 ignition[1027]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Sep 13 00:23:33.394442 ignition[1027]: DEBUG : files: compiled without relabeling support, skipping Sep 13 00:23:33.397939 ignition[1027]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Sep 13 00:23:33.397939 ignition[1027]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Sep 13 00:23:33.397939 ignition[1027]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Sep 13 00:23:33.397939 ignition[1027]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Sep 13 00:23:33.397939 ignition[1027]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Sep 13 00:23:33.397620 unknown[1027]: wrote ssh authorized keys file for user: core Sep 13 00:23:33.405837 ignition[1027]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz" Sep 13 00:23:33.405837 ignition[1027]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.0-linux-amd64.tar.gz: attempt #1 Sep 13 00:23:33.462792 ignition[1027]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Sep 13 00:23:33.885235 systemd-networkd[851]: eth0: Gained IPv6LL Sep 13 00:23:33.935481 ignition[1027]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz" Sep 13 00:23:33.938128 ignition[1027]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Sep 13 00:23:33.938128 ignition[1027]: INFO : files: createFilesystemsFiles: 
createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" Sep 13 00:23:33.938128 ignition[1027]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Sep 13 00:23:33.938128 ignition[1027]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Sep 13 00:23:33.938128 ignition[1027]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Sep 13 00:23:33.938128 ignition[1027]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Sep 13 00:23:33.938128 ignition[1027]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Sep 13 00:23:33.938128 ignition[1027]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Sep 13 00:23:33.955032 ignition[1027]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Sep 13 00:23:33.955032 ignition[1027]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Sep 13 00:23:33.955032 ignition[1027]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Sep 13 00:23:33.955032 ignition[1027]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Sep 13 00:23:33.955032 ignition[1027]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Sep 13 00:23:33.955032 ignition[1027]: INFO : files: 
createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.32.4-x86-64.raw: attempt #1 Sep 13 00:23:34.272730 ignition[1027]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Sep 13 00:23:34.660486 ignition[1027]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Sep 13 00:23:34.660486 ignition[1027]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Sep 13 00:23:34.664979 ignition[1027]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Sep 13 00:23:34.667249 ignition[1027]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Sep 13 00:23:34.667249 ignition[1027]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Sep 13 00:23:34.667249 ignition[1027]: INFO : files: op(d): [started] processing unit "coreos-metadata.service" Sep 13 00:23:34.667249 ignition[1027]: INFO : files: op(d): op(e): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Sep 13 00:23:34.673936 ignition[1027]: INFO : files: op(d): op(e): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Sep 13 00:23:34.673936 ignition[1027]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service" Sep 13 00:23:34.673936 ignition[1027]: INFO : files: op(f): [started] setting preset to disabled for "coreos-metadata.service" Sep 13 00:23:34.689903 ignition[1027]: INFO : files: op(f): op(10): [started] removing enablement symlink(s) for "coreos-metadata.service" Sep 13 00:23:34.695900 ignition[1027]: INFO : files: op(f): op(10): [finished] removing enablement symlink(s) for "coreos-metadata.service" Sep 13 00:23:34.697597 
ignition[1027]: INFO : files: op(f): [finished] setting preset to disabled for "coreos-metadata.service" Sep 13 00:23:34.697597 ignition[1027]: INFO : files: op(11): [started] setting preset to enabled for "prepare-helm.service" Sep 13 00:23:34.697597 ignition[1027]: INFO : files: op(11): [finished] setting preset to enabled for "prepare-helm.service" Sep 13 00:23:34.697597 ignition[1027]: INFO : files: createResultFile: createFiles: op(12): [started] writing file "/sysroot/etc/.ignition-result.json" Sep 13 00:23:34.697597 ignition[1027]: INFO : files: createResultFile: createFiles: op(12): [finished] writing file "/sysroot/etc/.ignition-result.json" Sep 13 00:23:34.697597 ignition[1027]: INFO : files: files passed Sep 13 00:23:34.697597 ignition[1027]: INFO : Ignition finished successfully Sep 13 00:23:34.706798 systemd[1]: Finished ignition-files.service - Ignition (files). Sep 13 00:23:34.708542 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Sep 13 00:23:34.711340 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Sep 13 00:23:34.729601 systemd[1]: ignition-quench.service: Deactivated successfully. Sep 13 00:23:34.729731 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Sep 13 00:23:34.733329 initrd-setup-root-after-ignition[1056]: grep: /sysroot/oem/oem-release: No such file or directory Sep 13 00:23:34.736869 initrd-setup-root-after-ignition[1058]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Sep 13 00:23:34.736869 initrd-setup-root-after-ignition[1058]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Sep 13 00:23:34.740396 initrd-setup-root-after-ignition[1062]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Sep 13 00:23:34.739120 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. 
Sep 13 00:23:34.742111 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Sep 13 00:23:34.744205 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Sep 13 00:23:34.803267 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Sep 13 00:23:34.803393 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Sep 13 00:23:34.806867 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Sep 13 00:23:34.807963 systemd[1]: Reached target initrd.target - Initrd Default Target.
Sep 13 00:23:34.809975 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Sep 13 00:23:34.811118 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Sep 13 00:23:34.845995 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Sep 13 00:23:34.848963 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Sep 13 00:23:34.871146 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Sep 13 00:23:34.871340 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Sep 13 00:23:34.873537 systemd[1]: Stopped target timers.target - Timer Units.
Sep 13 00:23:34.875704 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Sep 13 00:23:34.875864 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Sep 13 00:23:34.880511 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Sep 13 00:23:34.880688 systemd[1]: Stopped target basic.target - Basic System.
Sep 13 00:23:34.882606 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Sep 13 00:23:34.882982 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Sep 13 00:23:34.883544 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Sep 13 00:23:34.883902 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System.
Sep 13 00:23:34.884469 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Sep 13 00:23:34.884831 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Sep 13 00:23:34.885408 systemd[1]: Stopped target sysinit.target - System Initialization.
Sep 13 00:23:34.885769 systemd[1]: Stopped target local-fs.target - Local File Systems.
Sep 13 00:23:34.886321 systemd[1]: Stopped target swap.target - Swaps.
Sep 13 00:23:34.886653 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Sep 13 00:23:34.886807 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Sep 13 00:23:34.904159 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Sep 13 00:23:34.904349 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Sep 13 00:23:34.906361 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Sep 13 00:23:34.908507 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Sep 13 00:23:34.908815 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Sep 13 00:23:34.908977 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Sep 13 00:23:34.914965 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Sep 13 00:23:34.915129 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Sep 13 00:23:34.917573 systemd[1]: Stopped target paths.target - Path Units.
Sep 13 00:23:34.918691 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Sep 13 00:23:34.924108 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Sep 13 00:23:34.925480 systemd[1]: Stopped target slices.target - Slice Units.
Sep 13 00:23:34.927789 systemd[1]: Stopped target sockets.target - Socket Units.
Sep 13 00:23:34.929621 systemd[1]: iscsid.socket: Deactivated successfully.
Sep 13 00:23:34.929749 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Sep 13 00:23:34.931570 systemd[1]: iscsiuio.socket: Deactivated successfully.
Sep 13 00:23:34.931685 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Sep 13 00:23:34.932472 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Sep 13 00:23:34.932622 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Sep 13 00:23:34.934260 systemd[1]: ignition-files.service: Deactivated successfully.
Sep 13 00:23:34.934436 systemd[1]: Stopped ignition-files.service - Ignition (files).
Sep 13 00:23:34.939087 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Sep 13 00:23:34.942982 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Sep 13 00:23:34.943908 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Sep 13 00:23:34.944158 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Sep 13 00:23:34.945964 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Sep 13 00:23:34.946136 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Sep 13 00:23:34.956138 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Sep 13 00:23:34.957312 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Sep 13 00:23:34.971499 ignition[1083]: INFO : Ignition 2.21.0
Sep 13 00:23:34.971499 ignition[1083]: INFO : Stage: umount
Sep 13 00:23:34.973615 ignition[1083]: INFO : no configs at "/usr/lib/ignition/base.d"
Sep 13 00:23:34.973615 ignition[1083]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Sep 13 00:23:34.976189 ignition[1083]: INFO : umount: umount passed
Sep 13 00:23:34.976189 ignition[1083]: INFO : Ignition finished successfully
Sep 13 00:23:34.976764 systemd[1]: ignition-mount.service: Deactivated successfully.
Sep 13 00:23:34.976910 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Sep 13 00:23:34.978700 systemd[1]: Stopped target network.target - Network.
Sep 13 00:23:34.980605 systemd[1]: ignition-disks.service: Deactivated successfully.
Sep 13 00:23:34.980673 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Sep 13 00:23:34.981726 systemd[1]: ignition-kargs.service: Deactivated successfully.
Sep 13 00:23:34.981781 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Sep 13 00:23:34.984766 systemd[1]: ignition-setup.service: Deactivated successfully.
Sep 13 00:23:34.984825 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Sep 13 00:23:34.986638 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Sep 13 00:23:34.986689 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Sep 13 00:23:34.989027 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Sep 13 00:23:34.990975 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Sep 13 00:23:34.994055 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Sep 13 00:23:34.998436 systemd[1]: systemd-resolved.service: Deactivated successfully.
Sep 13 00:23:34.998566 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Sep 13 00:23:35.002407 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully.
Sep 13 00:23:35.002702 systemd[1]: systemd-networkd.service: Deactivated successfully.
Sep 13 00:23:35.002852 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Sep 13 00:23:35.005710 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully.
Sep 13 00:23:35.006372 systemd[1]: Stopped target network-pre.target - Preparation for Network.
Sep 13 00:23:35.007270 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Sep 13 00:23:35.007322 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Sep 13 00:23:35.008378 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Sep 13 00:23:35.012462 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Sep 13 00:23:35.012520 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Sep 13 00:23:35.013834 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Sep 13 00:23:35.013888 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Sep 13 00:23:35.021181 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Sep 13 00:23:35.021234 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Sep 13 00:23:35.023203 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Sep 13 00:23:35.023252 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Sep 13 00:23:35.026250 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Sep 13 00:23:35.029842 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Sep 13 00:23:35.029936 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully.
Sep 13 00:23:35.053749 systemd[1]: systemd-udevd.service: Deactivated successfully.
Sep 13 00:23:35.055164 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Sep 13 00:23:35.055622 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Sep 13 00:23:35.055686 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Sep 13 00:23:35.059968 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Sep 13 00:23:35.060039 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Sep 13 00:23:35.061110 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Sep 13 00:23:35.061179 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Sep 13 00:23:35.061978 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Sep 13 00:23:35.062060 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Sep 13 00:23:35.067845 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Sep 13 00:23:35.067911 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Sep 13 00:23:35.072067 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Sep 13 00:23:35.072279 systemd[1]: systemd-network-generator.service: Deactivated successfully.
Sep 13 00:23:35.072350 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line.
Sep 13 00:23:35.078186 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Sep 13 00:23:35.078250 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Sep 13 00:23:35.082437 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully.
Sep 13 00:23:35.082500 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Sep 13 00:23:35.085260 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Sep 13 00:23:35.085322 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Sep 13 00:23:35.089954 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Sep 13 00:23:35.090041 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Sep 13 00:23:35.096383 systemd[1]: run-credentials-systemd\x2dnetwork\x2dgenerator.service.mount: Deactivated successfully.
Sep 13 00:23:35.096459 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev\x2dearly.service.mount: Deactivated successfully.
Sep 13 00:23:35.096519 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully.
Sep 13 00:23:35.096584 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
Sep 13 00:23:35.097299 systemd[1]: sysroot-boot.service: Deactivated successfully.
Sep 13 00:23:35.097442 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Sep 13 00:23:35.100727 systemd[1]: network-cleanup.service: Deactivated successfully.
Sep 13 00:23:35.100865 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Sep 13 00:23:35.104592 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Sep 13 00:23:35.104725 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Sep 13 00:23:35.109113 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Sep 13 00:23:35.111763 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Sep 13 00:23:35.111842 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Sep 13 00:23:35.116073 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Sep 13 00:23:35.146069 systemd[1]: Switching root.
Sep 13 00:23:35.191748 systemd-journald[220]: Journal stopped
Sep 13 00:23:36.840368 systemd-journald[220]: Received SIGTERM from PID 1 (systemd).
Sep 13 00:23:36.840427 kernel: SELinux: policy capability network_peer_controls=1
Sep 13 00:23:36.840441 kernel: SELinux: policy capability open_perms=1
Sep 13 00:23:36.840466 kernel: SELinux: policy capability extended_socket_class=1
Sep 13 00:23:36.840482 kernel: SELinux: policy capability always_check_network=0
Sep 13 00:23:36.840494 kernel: SELinux: policy capability cgroup_seclabel=1
Sep 13 00:23:36.840505 kernel: SELinux: policy capability nnp_nosuid_transition=1
Sep 13 00:23:36.840516 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Sep 13 00:23:36.840532 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Sep 13 00:23:36.840544 kernel: SELinux: policy capability userspace_initial_context=0
Sep 13 00:23:36.840555 kernel: audit: type=1403 audit(1757723016.010:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Sep 13 00:23:36.840576 systemd[1]: Successfully loaded SELinux policy in 49.706ms.
Sep 13 00:23:36.840599 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 13.336ms.
Sep 13 00:23:36.840613 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Sep 13 00:23:36.840625 systemd[1]: Detected virtualization kvm.
Sep 13 00:23:36.840638 systemd[1]: Detected architecture x86-64.
Sep 13 00:23:36.840650 systemd[1]: Detected first boot.
Sep 13 00:23:36.840663 systemd[1]: Initializing machine ID from VM UUID.
Sep 13 00:23:36.840675 zram_generator::config[1128]: No configuration found.
Sep 13 00:23:36.840688 kernel: Guest personality initialized and is inactive
Sep 13 00:23:36.840705 kernel: VMCI host device registered (name=vmci, major=10, minor=125)
Sep 13 00:23:36.840717 kernel: Initialized host personality
Sep 13 00:23:36.840728 kernel: NET: Registered PF_VSOCK protocol family
Sep 13 00:23:36.840740 systemd[1]: Populated /etc with preset unit settings.
Sep 13 00:23:36.840753 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully.
Sep 13 00:23:36.840765 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Sep 13 00:23:36.840777 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Sep 13 00:23:36.840789 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Sep 13 00:23:36.840806 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Sep 13 00:23:36.840823 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Sep 13 00:23:36.840835 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Sep 13 00:23:36.840847 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Sep 13 00:23:36.840859 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Sep 13 00:23:36.840872 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Sep 13 00:23:36.840887 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Sep 13 00:23:36.840909 systemd[1]: Created slice user.slice - User and Session Slice.
Sep 13 00:23:36.840921 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Sep 13 00:23:36.840939 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Sep 13 00:23:36.840952 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Sep 13 00:23:36.840964 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Sep 13 00:23:36.840977 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Sep 13 00:23:36.840992 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Sep 13 00:23:36.841004 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Sep 13 00:23:36.841063 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Sep 13 00:23:36.841075 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Sep 13 00:23:36.841090 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Sep 13 00:23:36.841103 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Sep 13 00:23:36.841114 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Sep 13 00:23:36.841126 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Sep 13 00:23:36.841139 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Sep 13 00:23:36.841151 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Sep 13 00:23:36.841162 systemd[1]: Reached target slices.target - Slice Units.
Sep 13 00:23:36.841175 systemd[1]: Reached target swap.target - Swaps.
Sep 13 00:23:36.841187 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Sep 13 00:23:36.841201 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Sep 13 00:23:36.841213 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption.
Sep 13 00:23:36.841224 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Sep 13 00:23:36.841236 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Sep 13 00:23:36.841248 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Sep 13 00:23:36.841260 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Sep 13 00:23:36.841272 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Sep 13 00:23:36.841284 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Sep 13 00:23:36.841296 systemd[1]: Mounting media.mount - External Media Directory...
Sep 13 00:23:36.841310 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 13 00:23:36.841322 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Sep 13 00:23:36.841334 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Sep 13 00:23:36.841346 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Sep 13 00:23:36.841359 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Sep 13 00:23:36.841371 systemd[1]: Reached target machines.target - Containers.
Sep 13 00:23:36.841383 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Sep 13 00:23:36.841395 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Sep 13 00:23:36.841412 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Sep 13 00:23:36.841424 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Sep 13 00:23:36.841436 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Sep 13 00:23:36.841448 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Sep 13 00:23:36.841460 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Sep 13 00:23:36.841471 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Sep 13 00:23:36.841484 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Sep 13 00:23:36.841496 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Sep 13 00:23:36.841508 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Sep 13 00:23:36.841524 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Sep 13 00:23:36.841536 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Sep 13 00:23:36.841548 systemd[1]: Stopped systemd-fsck-usr.service.
Sep 13 00:23:36.841561 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Sep 13 00:23:36.841573 systemd[1]: Starting systemd-journald.service - Journal Service...
Sep 13 00:23:36.841585 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Sep 13 00:23:36.841597 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Sep 13 00:23:36.841608 kernel: fuse: init (API version 7.41)
Sep 13 00:23:36.841620 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Sep 13 00:23:36.841637 kernel: loop: module loaded
Sep 13 00:23:36.841651 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials...
Sep 13 00:23:36.841663 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Sep 13 00:23:36.841675 systemd[1]: verity-setup.service: Deactivated successfully.
Sep 13 00:23:36.841687 systemd[1]: Stopped verity-setup.service.
Sep 13 00:23:36.841704 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 13 00:23:36.841720 kernel: ACPI: bus type drm_connector registered
Sep 13 00:23:36.841732 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Sep 13 00:23:36.841743 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Sep 13 00:23:36.841755 systemd[1]: Mounted media.mount - External Media Directory.
Sep 13 00:23:36.841771 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Sep 13 00:23:36.841783 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Sep 13 00:23:36.841796 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Sep 13 00:23:36.841807 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Sep 13 00:23:36.841819 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Sep 13 00:23:36.841831 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Sep 13 00:23:36.841843 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Sep 13 00:23:36.841855 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Sep 13 00:23:36.841867 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Sep 13 00:23:36.841883 systemd[1]: modprobe@drm.service: Deactivated successfully.
Sep 13 00:23:36.841903 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Sep 13 00:23:36.841937 systemd-journald[1199]: Collecting audit messages is disabled.
Sep 13 00:23:36.841961 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Sep 13 00:23:36.841974 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Sep 13 00:23:36.841985 systemd-journald[1199]: Journal started
Sep 13 00:23:36.842029 systemd-journald[1199]: Runtime Journal (/run/log/journal/1b7b504b3fe44c928abc38fb39676410) is 6M, max 48.6M, 42.5M free.
Sep 13 00:23:36.570204 systemd[1]: Queued start job for default target multi-user.target.
Sep 13 00:23:36.591142 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
Sep 13 00:23:36.591644 systemd[1]: systemd-journald.service: Deactivated successfully.
Sep 13 00:23:36.845212 systemd[1]: Started systemd-journald.service - Journal Service.
Sep 13 00:23:36.846799 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Sep 13 00:23:36.847058 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Sep 13 00:23:36.848614 systemd[1]: modprobe@loop.service: Deactivated successfully.
Sep 13 00:23:36.848822 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Sep 13 00:23:36.850491 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Sep 13 00:23:36.852152 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Sep 13 00:23:36.853943 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Sep 13 00:23:36.855746 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials.
Sep 13 00:23:36.869181 systemd[1]: Reached target network-pre.target - Preparation for Network.
Sep 13 00:23:36.872245 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Sep 13 00:23:36.874612 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Sep 13 00:23:36.876040 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Sep 13 00:23:36.876072 systemd[1]: Reached target local-fs.target - Local File Systems.
Sep 13 00:23:36.878296 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management.
Sep 13 00:23:36.887140 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Sep 13 00:23:36.888703 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Sep 13 00:23:36.890422 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Sep 13 00:23:36.894226 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Sep 13 00:23:36.896153 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Sep 13 00:23:36.898694 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Sep 13 00:23:36.900379 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Sep 13 00:23:36.901781 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Sep 13 00:23:36.906056 systemd-journald[1199]: Time spent on flushing to /var/log/journal/1b7b504b3fe44c928abc38fb39676410 is 20.283ms for 984 entries.
Sep 13 00:23:36.906056 systemd-journald[1199]: System Journal (/var/log/journal/1b7b504b3fe44c928abc38fb39676410) is 8M, max 195.6M, 187.6M free.
Sep 13 00:23:36.933120 systemd-journald[1199]: Received client request to flush runtime journal.
Sep 13 00:23:36.935610 kernel: loop0: detected capacity change from 0 to 224512
Sep 13 00:23:36.906169 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Sep 13 00:23:36.917242 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Sep 13 00:23:36.920863 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Sep 13 00:23:36.922394 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Sep 13 00:23:36.934200 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Sep 13 00:23:36.935774 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Sep 13 00:23:36.939215 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk...
Sep 13 00:23:36.941789 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Sep 13 00:23:36.953289 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Sep 13 00:23:36.962951 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Sep 13 00:23:36.965259 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Sep 13 00:23:36.971579 systemd-tmpfiles[1248]: ACLs are not supported, ignoring.
Sep 13 00:23:36.971595 systemd-tmpfiles[1248]: ACLs are not supported, ignoring.
Sep 13 00:23:36.978971 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Sep 13 00:23:36.982784 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Sep 13 00:23:36.985859 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk.
Sep 13 00:23:36.998142 kernel: loop1: detected capacity change from 0 to 146240
Sep 13 00:23:37.027430 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Sep 13 00:23:37.030048 kernel: loop2: detected capacity change from 0 to 113872
Sep 13 00:23:37.031352 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Sep 13 00:23:37.065063 systemd-tmpfiles[1270]: ACLs are not supported, ignoring.
Sep 13 00:23:37.065085 systemd-tmpfiles[1270]: ACLs are not supported, ignoring.
Sep 13 00:23:37.071537 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Sep 13 00:23:37.075843 kernel: loop3: detected capacity change from 0 to 224512
Sep 13 00:23:37.085055 kernel: loop4: detected capacity change from 0 to 146240
Sep 13 00:23:37.102045 kernel: loop5: detected capacity change from 0 to 113872
Sep 13 00:23:37.114197 (sd-merge)[1273]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'.
Sep 13 00:23:37.114949 (sd-merge)[1273]: Merged extensions into '/usr'.
Sep 13 00:23:37.120472 systemd[1]: Reload requested from client PID 1247 ('systemd-sysext') (unit systemd-sysext.service)...
Sep 13 00:23:37.120492 systemd[1]: Reloading...
Sep 13 00:23:37.189056 zram_generator::config[1297]: No configuration found.
Sep 13 00:23:37.254319 ldconfig[1242]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Sep 13 00:23:37.298092 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Sep 13 00:23:37.380330 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Sep 13 00:23:37.380521 systemd[1]: Reloading finished in 259 ms.
Sep 13 00:23:37.416987 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Sep 13 00:23:37.418532 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Sep 13 00:23:37.433427 systemd[1]: Starting ensure-sysext.service...
Sep 13 00:23:37.435231 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Sep 13 00:23:37.445274 systemd[1]: Reload requested from client PID 1337 ('systemctl') (unit ensure-sysext.service)...
Sep 13 00:23:37.445289 systemd[1]: Reloading...
Sep 13 00:23:37.488487 systemd-tmpfiles[1339]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring.
Sep 13 00:23:37.490495 systemd-tmpfiles[1339]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring.
Sep 13 00:23:37.490841 systemd-tmpfiles[1339]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Sep 13 00:23:37.491141 systemd-tmpfiles[1339]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Sep 13 00:23:37.493119 systemd-tmpfiles[1339]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Sep 13 00:23:37.493478 systemd-tmpfiles[1339]: ACLs are not supported, ignoring. Sep 13 00:23:37.493612 systemd-tmpfiles[1339]: ACLs are not supported, ignoring. Sep 13 00:23:37.498038 zram_generator::config[1366]: No configuration found. Sep 13 00:23:37.499403 systemd-tmpfiles[1339]: Detected autofs mount point /boot during canonicalization of boot. Sep 13 00:23:37.500115 systemd-tmpfiles[1339]: Skipping /boot Sep 13 00:23:37.513621 systemd-tmpfiles[1339]: Detected autofs mount point /boot during canonicalization of boot. Sep 13 00:23:37.513762 systemd-tmpfiles[1339]: Skipping /boot Sep 13 00:23:37.591264 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 13 00:23:37.672700 systemd[1]: Reloading finished in 227 ms. Sep 13 00:23:37.700464 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Sep 13 00:23:37.720829 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Sep 13 00:23:37.729550 systemd[1]: Starting audit-rules.service - Load Audit Rules... Sep 13 00:23:37.732953 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Sep 13 00:23:37.735526 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Sep 13 00:23:37.748389 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Sep 13 00:23:37.752520 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Sep 13 00:23:37.754997 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... 
Sep 13 00:23:37.758654 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 13 00:23:37.760333 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Sep 13 00:23:37.768352 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Sep 13 00:23:37.770620 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Sep 13 00:23:37.773250 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Sep 13 00:23:37.774385 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Sep 13 00:23:37.774492 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Sep 13 00:23:37.774589 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 13 00:23:37.780138 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Sep 13 00:23:37.782411 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Sep 13 00:23:37.785467 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 13 00:23:37.786232 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Sep 13 00:23:37.788508 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 13 00:23:37.790083 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Sep 13 00:23:37.792117 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 13 00:23:37.792784 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. 
Sep 13 00:23:37.802753 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 13 00:23:37.803006 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Sep 13 00:23:37.804842 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Sep 13 00:23:37.807402 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Sep 13 00:23:37.811307 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Sep 13 00:23:37.812446 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Sep 13 00:23:37.812609 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Sep 13 00:23:37.818326 augenrules[1439]: No rules Sep 13 00:23:37.821502 systemd[1]: Starting systemd-update-done.service - Update is Completed... Sep 13 00:23:37.822657 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 13 00:23:37.823360 systemd-udevd[1410]: Using default interface naming scheme 'v255'. Sep 13 00:23:37.825722 systemd[1]: audit-rules.service: Deactivated successfully. Sep 13 00:23:37.826000 systemd[1]: Finished audit-rules.service - Load Audit Rules. Sep 13 00:23:37.828542 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Sep 13 00:23:37.830673 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 13 00:23:37.830988 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Sep 13 00:23:37.832847 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. 
Sep 13 00:23:37.833219 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Sep 13 00:23:37.835161 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 13 00:23:37.835491 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Sep 13 00:23:37.837208 systemd[1]: Finished systemd-update-done.service - Update is Completed. Sep 13 00:23:37.842808 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Sep 13 00:23:37.849959 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 13 00:23:37.851443 systemd[1]: Starting audit-rules.service - Load Audit Rules... Sep 13 00:23:37.852460 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Sep 13 00:23:37.853678 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Sep 13 00:23:37.856471 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Sep 13 00:23:37.867606 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Sep 13 00:23:37.877071 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Sep 13 00:23:37.878246 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Sep 13 00:23:37.878355 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Sep 13 00:23:37.878479 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). 
Sep 13 00:23:37.878559 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 13 00:23:37.879701 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Sep 13 00:23:37.882506 systemd[1]: Started systemd-userdbd.service - User Database Manager. Sep 13 00:23:37.884320 systemd[1]: modprobe@drm.service: Deactivated successfully. Sep 13 00:23:37.884630 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Sep 13 00:23:37.886530 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 13 00:23:37.886787 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Sep 13 00:23:37.888500 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 13 00:23:37.890065 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Sep 13 00:23:37.900889 systemd[1]: Finished ensure-sysext.service. Sep 13 00:23:37.908162 systemd[1]: Starting systemd-networkd.service - Network Configuration... Sep 13 00:23:37.910088 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Sep 13 00:23:37.913161 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Sep 13 00:23:37.916044 augenrules[1454]: /sbin/augenrules: No change Sep 13 00:23:37.920980 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 13 00:23:37.921265 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Sep 13 00:23:37.922926 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Sep 13 00:23:37.927260 augenrules[1516]: No rules Sep 13 00:23:37.929381 systemd[1]: audit-rules.service: Deactivated successfully. 
Sep 13 00:23:37.929657 systemd[1]: Finished audit-rules.service - Load Audit Rules. Sep 13 00:23:37.939049 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Sep 13 00:23:38.005066 kernel: mousedev: PS/2 mouse device common for all mice Sep 13 00:23:38.015102 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Sep 13 00:23:38.017823 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Sep 13 00:23:38.022045 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3 Sep 13 00:23:38.030046 kernel: ACPI: button: Power Button [PWRF] Sep 13 00:23:38.042298 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Sep 13 00:23:38.085391 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt Sep 13 00:23:38.085724 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD Sep 13 00:23:38.090089 systemd-networkd[1503]: lo: Link UP Sep 13 00:23:38.090099 systemd-networkd[1503]: lo: Gained carrier Sep 13 00:23:38.091833 systemd-networkd[1503]: Enumeration completed Sep 13 00:23:38.092300 systemd-networkd[1503]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 13 00:23:38.092313 systemd-networkd[1503]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Sep 13 00:23:38.095310 systemd[1]: Started systemd-networkd.service - Network Configuration. Sep 13 00:23:38.098263 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... Sep 13 00:23:38.102127 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... 
Sep 13 00:23:38.103660 systemd-networkd[1503]: eth0: Link UP Sep 13 00:23:38.103818 systemd-networkd[1503]: eth0: Gained carrier Sep 13 00:23:38.103843 systemd-networkd[1503]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 13 00:23:38.115094 systemd-networkd[1503]: eth0: DHCPv4 address 10.0.0.78/16, gateway 10.0.0.1 acquired from 10.0.0.1 Sep 13 00:23:38.129489 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. Sep 13 00:23:38.154873 systemd-resolved[1408]: Positive Trust Anchors: Sep 13 00:23:38.154893 systemd-resolved[1408]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Sep 13 00:23:38.154927 systemd-resolved[1408]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Sep 13 00:23:38.158669 systemd-resolved[1408]: Defaulting to hostname 'linux'. Sep 13 00:23:38.160732 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Sep 13 00:23:38.162080 systemd[1]: Reached target network.target - Network. Sep 13 00:23:38.163063 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Sep 13 00:23:38.177198 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Sep 13 00:23:38.211052 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Sep 13 00:23:38.629830 systemd-timesyncd[1505]: Contacted time server 10.0.0.1:123 (10.0.0.1). 
Sep 13 00:23:38.630043 systemd-timesyncd[1505]: Initial clock synchronization to Sat 2025-09-13 00:23:38.629731 UTC. Sep 13 00:23:38.630089 systemd-resolved[1408]: Clock change detected. Flushing caches. Sep 13 00:23:38.630661 systemd[1]: Reached target time-set.target - System Time Set. Sep 13 00:23:38.639240 kernel: kvm_amd: TSC scaling supported Sep 13 00:23:38.639278 kernel: kvm_amd: Nested Virtualization enabled Sep 13 00:23:38.639291 kernel: kvm_amd: Nested Paging enabled Sep 13 00:23:38.639316 kernel: kvm_amd: LBR virtualization supported Sep 13 00:23:38.639329 kernel: kvm_amd: Virtual VMLOAD VMSAVE supported Sep 13 00:23:38.639344 kernel: kvm_amd: Virtual GIF supported Sep 13 00:23:38.690829 kernel: EDAC MC: Ver: 3.0.0 Sep 13 00:23:38.730354 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Sep 13 00:23:38.731831 systemd[1]: Reached target sysinit.target - System Initialization. Sep 13 00:23:38.732971 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Sep 13 00:23:38.734197 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Sep 13 00:23:38.735426 systemd[1]: Started google-oslogin-cache.timer - NSS cache refresh timer. Sep 13 00:23:38.736711 systemd[1]: Started logrotate.timer - Daily rotation of log files. Sep 13 00:23:38.738033 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Sep 13 00:23:38.739227 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Sep 13 00:23:38.740451 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Sep 13 00:23:38.740496 systemd[1]: Reached target paths.target - Path Units. Sep 13 00:23:38.741413 systemd[1]: Reached target timers.target - Timer Units. 
Sep 13 00:23:38.743478 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Sep 13 00:23:38.746019 systemd[1]: Starting docker.socket - Docker Socket for the API... Sep 13 00:23:38.749695 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). Sep 13 00:23:38.751137 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). Sep 13 00:23:38.752335 systemd[1]: Reached target ssh-access.target - SSH Access Available. Sep 13 00:23:38.764374 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Sep 13 00:23:38.765775 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. Sep 13 00:23:38.767496 systemd[1]: Listening on docker.socket - Docker Socket for the API. Sep 13 00:23:38.769213 systemd[1]: Reached target sockets.target - Socket Units. Sep 13 00:23:38.783311 systemd[1]: Reached target basic.target - Basic System. Sep 13 00:23:38.784240 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Sep 13 00:23:38.784268 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Sep 13 00:23:38.785308 systemd[1]: Starting containerd.service - containerd container runtime... Sep 13 00:23:38.787335 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Sep 13 00:23:38.789214 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Sep 13 00:23:38.791293 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Sep 13 00:23:38.794914 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Sep 13 00:23:38.796039 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). 
Sep 13 00:23:38.797905 systemd[1]: Starting google-oslogin-cache.service - NSS cache refresh... Sep 13 00:23:38.798867 jq[1567]: false Sep 13 00:23:38.800134 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Sep 13 00:23:38.802291 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Sep 13 00:23:38.807531 extend-filesystems[1568]: Found /dev/vda6 Sep 13 00:23:38.810660 extend-filesystems[1568]: Found /dev/vda9 Sep 13 00:23:38.813050 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Sep 13 00:23:38.815831 extend-filesystems[1568]: Checking size of /dev/vda9 Sep 13 00:23:38.815459 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Sep 13 00:23:38.820370 systemd[1]: Starting systemd-logind.service - User Login Management... Sep 13 00:23:38.821431 google_oslogin_nss_cache[1569]: oslogin_cache_refresh[1569]: Refreshing passwd entry cache Sep 13 00:23:38.821447 oslogin_cache_refresh[1569]: Refreshing passwd entry cache Sep 13 00:23:38.822258 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Sep 13 00:23:38.822804 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Sep 13 00:23:38.824011 systemd[1]: Starting update-engine.service - Update Engine... Sep 13 00:23:38.826986 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Sep 13 00:23:38.829969 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Sep 13 00:23:38.831727 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. 
Sep 13 00:23:38.831864 oslogin_cache_refresh[1569]: Failure getting users, quitting Sep 13 00:23:38.832043 google_oslogin_nss_cache[1569]: oslogin_cache_refresh[1569]: Failure getting users, quitting Sep 13 00:23:38.832043 google_oslogin_nss_cache[1569]: oslogin_cache_refresh[1569]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. Sep 13 00:23:38.831883 oslogin_cache_refresh[1569]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. Sep 13 00:23:38.832032 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Sep 13 00:23:38.834168 google_oslogin_nss_cache[1569]: oslogin_cache_refresh[1569]: Refreshing group entry cache Sep 13 00:23:38.834124 oslogin_cache_refresh[1569]: Refreshing group entry cache Sep 13 00:23:38.834582 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Sep 13 00:23:38.836134 jq[1589]: true Sep 13 00:23:38.834893 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Sep 13 00:23:38.838259 systemd[1]: motdgen.service: Deactivated successfully. Sep 13 00:23:38.838542 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Sep 13 00:23:38.845803 jq[1593]: true Sep 13 00:23:38.846813 update_engine[1585]: I20250913 00:23:38.846215 1585 main.cc:92] Flatcar Update Engine starting Sep 13 00:23:38.849945 (ntainerd)[1594]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Sep 13 00:23:38.863422 google_oslogin_nss_cache[1569]: oslogin_cache_refresh[1569]: Failure getting groups, quitting Sep 13 00:23:38.863422 google_oslogin_nss_cache[1569]: oslogin_cache_refresh[1569]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. 
Sep 13 00:23:38.863325 oslogin_cache_refresh[1569]: Failure getting groups, quitting Sep 13 00:23:38.863341 oslogin_cache_refresh[1569]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Sep 13 00:23:38.870826 tar[1591]: linux-amd64/LICENSE Sep 13 00:23:38.870826 tar[1591]: linux-amd64/helm Sep 13 00:23:38.884595 dbus-daemon[1565]: [system] SELinux support is enabled Sep 13 00:23:38.885049 systemd[1]: Started dbus.service - D-Bus System Message Bus. Sep 13 00:23:38.890717 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Sep 13 00:23:38.890751 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Sep 13 00:23:38.892356 update_engine[1585]: I20250913 00:23:38.891204 1585 update_check_scheduler.cc:74] Next update check in 5m1s Sep 13 00:23:38.892073 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Sep 13 00:23:38.892088 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Sep 13 00:23:38.893384 systemd[1]: Started update-engine.service - Update Engine. Sep 13 00:23:38.897496 systemd[1]: Started locksmithd.service - Cluster reboot manager. Sep 13 00:23:38.912049 systemd-logind[1582]: Watching system buttons on /dev/input/event2 (Power Button) Sep 13 00:23:38.912071 systemd-logind[1582]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Sep 13 00:23:38.912382 systemd-logind[1582]: New seat seat0. Sep 13 00:23:38.913819 systemd[1]: Started systemd-logind.service - User Login Management. 
Sep 13 00:23:38.921537 extend-filesystems[1568]: Resized partition /dev/vda9 Sep 13 00:23:38.922126 systemd[1]: google-oslogin-cache.service: Deactivated successfully. Sep 13 00:23:38.922415 systemd[1]: Finished google-oslogin-cache.service - NSS cache refresh. Sep 13 00:23:38.927001 extend-filesystems[1625]: resize2fs 1.47.2 (1-Jan-2025) Sep 13 00:23:38.931606 bash[1620]: Updated "/home/core/.ssh/authorized_keys" Sep 13 00:23:38.931871 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Sep 13 00:23:38.933632 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Sep 13 00:23:38.936063 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Sep 13 00:23:38.952826 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Sep 13 00:23:39.078624 locksmithd[1621]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Sep 13 00:23:39.265420 containerd[1594]: time="2025-09-13T00:23:39Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8 Sep 13 00:23:39.265420 containerd[1594]: time="2025-09-13T00:23:39.264336533Z" level=info msg="starting containerd" revision=06b99ca80cdbfbc6cc8bd567021738c9af2b36ce version=v2.0.4 Sep 13 00:23:39.265625 extend-filesystems[1625]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Sep 13 00:23:39.265625 extend-filesystems[1625]: old_desc_blocks = 1, new_desc_blocks = 1 Sep 13 00:23:39.265625 extend-filesystems[1625]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. Sep 13 00:23:39.271095 sshd_keygen[1599]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Sep 13 00:23:39.266529 systemd[1]: extend-filesystems.service: Deactivated successfully. 
Sep 13 00:23:39.271508 extend-filesystems[1568]: Resized filesystem in /dev/vda9 Sep 13 00:23:39.266831 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Sep 13 00:23:39.279693 containerd[1594]: time="2025-09-13T00:23:39.278920844Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="10.259µs" Sep 13 00:23:39.279693 containerd[1594]: time="2025-09-13T00:23:39.278968794Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1 Sep 13 00:23:39.279693 containerd[1594]: time="2025-09-13T00:23:39.278989202Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1 Sep 13 00:23:39.279693 containerd[1594]: time="2025-09-13T00:23:39.279194577Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1 Sep 13 00:23:39.279693 containerd[1594]: time="2025-09-13T00:23:39.279211789Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1 Sep 13 00:23:39.279693 containerd[1594]: time="2025-09-13T00:23:39.279241004Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Sep 13 00:23:39.279693 containerd[1594]: time="2025-09-13T00:23:39.279320724Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Sep 13 00:23:39.279693 containerd[1594]: time="2025-09-13T00:23:39.279336874Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Sep 13 00:23:39.279693 containerd[1594]: time="2025-09-13T00:23:39.279683865Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" 
id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Sep 13 00:23:39.279693 containerd[1594]: time="2025-09-13T00:23:39.279701297Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Sep 13 00:23:39.279978 containerd[1594]: time="2025-09-13T00:23:39.279714011Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Sep 13 00:23:39.279978 containerd[1594]: time="2025-09-13T00:23:39.279724821Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1 Sep 13 00:23:39.279978 containerd[1594]: time="2025-09-13T00:23:39.279934294Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1 Sep 13 00:23:39.280248 containerd[1594]: time="2025-09-13T00:23:39.280215411Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Sep 13 00:23:39.280279 containerd[1594]: time="2025-09-13T00:23:39.280263562Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Sep 13 00:23:39.280301 containerd[1594]: time="2025-09-13T00:23:39.280277127Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1 Sep 13 00:23:39.280353 containerd[1594]: time="2025-09-13T00:23:39.280326971Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1 Sep 13 00:23:39.280825 containerd[1594]: time="2025-09-13T00:23:39.280775031Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1 Sep 13 00:23:39.280915 containerd[1594]: time="2025-09-13T00:23:39.280888383Z" level=info msg="metadata content store 
policy set" policy=shared Sep 13 00:23:39.288039 containerd[1594]: time="2025-09-13T00:23:39.287996173Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1 Sep 13 00:23:39.288122 containerd[1594]: time="2025-09-13T00:23:39.288053651Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1 Sep 13 00:23:39.288122 containerd[1594]: time="2025-09-13T00:23:39.288070292Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1 Sep 13 00:23:39.288122 containerd[1594]: time="2025-09-13T00:23:39.288081854Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1 Sep 13 00:23:39.288122 containerd[1594]: time="2025-09-13T00:23:39.288101270Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1 Sep 13 00:23:39.288122 containerd[1594]: time="2025-09-13T00:23:39.288113914Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1 Sep 13 00:23:39.288211 containerd[1594]: time="2025-09-13T00:23:39.288125706Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1 Sep 13 00:23:39.288211 containerd[1594]: time="2025-09-13T00:23:39.288137298Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1 Sep 13 00:23:39.288211 containerd[1594]: time="2025-09-13T00:23:39.288146745Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1 Sep 13 00:23:39.288211 containerd[1594]: time="2025-09-13T00:23:39.288156163Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1 Sep 13 00:23:39.288211 containerd[1594]: time="2025-09-13T00:23:39.288164068Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager 
type=io.containerd.shim.v1 Sep 13 00:23:39.288211 containerd[1594]: time="2025-09-13T00:23:39.288175379Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2 Sep 13 00:23:39.288325 containerd[1594]: time="2025-09-13T00:23:39.288298049Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1 Sep 13 00:23:39.288325 containerd[1594]: time="2025-09-13T00:23:39.288315572Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1 Sep 13 00:23:39.288366 containerd[1594]: time="2025-09-13T00:23:39.288329849Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1 Sep 13 00:23:39.288366 containerd[1594]: time="2025-09-13T00:23:39.288340489Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1 Sep 13 00:23:39.288366 containerd[1594]: time="2025-09-13T00:23:39.288350898Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 Sep 13 00:23:39.288366 containerd[1594]: time="2025-09-13T00:23:39.288361117Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1 Sep 13 00:23:39.288441 containerd[1594]: time="2025-09-13T00:23:39.288372248Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1 Sep 13 00:23:39.288441 containerd[1594]: time="2025-09-13T00:23:39.288389681Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1 Sep 13 00:23:39.288441 containerd[1594]: time="2025-09-13T00:23:39.288405972Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 Sep 13 00:23:39.288441 containerd[1594]: time="2025-09-13T00:23:39.288417523Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 Sep 13 00:23:39.288441 containerd[1594]: 
time="2025-09-13T00:23:39.288432101Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 Sep 13 00:23:39.288537 containerd[1594]: time="2025-09-13T00:23:39.288495950Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" Sep 13 00:23:39.288537 containerd[1594]: time="2025-09-13T00:23:39.288508404Z" level=info msg="Start snapshots syncer" Sep 13 00:23:39.288537 containerd[1594]: time="2025-09-13T00:23:39.288526427Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 Sep 13 00:23:39.288780 containerd[1594]: time="2025-09-13T00:23:39.288740309Z" level=info msg="starting cri plugin" config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateD
ir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" Sep 13 00:23:39.288900 containerd[1594]: time="2025-09-13T00:23:39.288806883Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 Sep 13 00:23:39.288970 containerd[1594]: time="2025-09-13T00:23:39.288944001Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 Sep 13 00:23:39.289105 containerd[1594]: time="2025-09-13T00:23:39.289078563Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 Sep 13 00:23:39.289105 containerd[1594]: time="2025-09-13T00:23:39.289102478Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 Sep 13 00:23:39.289146 containerd[1594]: time="2025-09-13T00:23:39.289119800Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 Sep 13 00:23:39.289146 containerd[1594]: time="2025-09-13T00:23:39.289132244Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 Sep 13 00:23:39.289146 containerd[1594]: time="2025-09-13T00:23:39.289143855Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 Sep 13 00:23:39.289200 containerd[1594]: time="2025-09-13T00:23:39.289153964Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 Sep 13 00:23:39.289200 containerd[1594]: time="2025-09-13T00:23:39.289164584Z" level=info msg="loading 
plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 Sep 13 00:23:39.289200 containerd[1594]: time="2025-09-13T00:23:39.289184932Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 Sep 13 00:23:39.289200 containerd[1594]: time="2025-09-13T00:23:39.289194330Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 Sep 13 00:23:39.289276 containerd[1594]: time="2025-09-13T00:23:39.289204890Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 Sep 13 00:23:39.289276 containerd[1594]: time="2025-09-13T00:23:39.289248832Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Sep 13 00:23:39.289276 containerd[1594]: time="2025-09-13T00:23:39.289260815Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Sep 13 00:23:39.289276 containerd[1594]: time="2025-09-13T00:23:39.289269902Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Sep 13 00:23:39.289352 containerd[1594]: time="2025-09-13T00:23:39.289278508Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Sep 13 00:23:39.289352 containerd[1594]: time="2025-09-13T00:23:39.289286964Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 Sep 13 00:23:39.289392 containerd[1594]: time="2025-09-13T00:23:39.289356043Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 Sep 13 00:23:39.289392 containerd[1594]: time="2025-09-13T00:23:39.289368036Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 Sep 13 
00:23:39.289392 containerd[1594]: time="2025-09-13T00:23:39.289385268Z" level=info msg="runtime interface created" Sep 13 00:23:39.289392 containerd[1594]: time="2025-09-13T00:23:39.289390778Z" level=info msg="created NRI interface" Sep 13 00:23:39.289461 containerd[1594]: time="2025-09-13T00:23:39.289402951Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 Sep 13 00:23:39.289461 containerd[1594]: time="2025-09-13T00:23:39.289414523Z" level=info msg="Connect containerd service" Sep 13 00:23:39.289461 containerd[1594]: time="2025-09-13T00:23:39.289434671Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Sep 13 00:23:39.290574 containerd[1594]: time="2025-09-13T00:23:39.290545895Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Sep 13 00:23:39.301396 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Sep 13 00:23:39.304742 systemd[1]: Starting issuegen.service - Generate /run/issue... Sep 13 00:23:39.325194 systemd[1]: issuegen.service: Deactivated successfully. Sep 13 00:23:39.325511 systemd[1]: Finished issuegen.service - Generate /run/issue. Sep 13 00:23:39.329636 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Sep 13 00:23:39.360388 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Sep 13 00:23:39.364209 systemd[1]: Started getty@tty1.service - Getty on tty1. Sep 13 00:23:39.368519 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Sep 13 00:23:39.369867 systemd[1]: Reached target getty.target - Login Prompts. 
Sep 13 00:23:39.397867 containerd[1594]: time="2025-09-13T00:23:39.397731175Z" level=info msg="Start subscribing containerd event" Sep 13 00:23:39.398179 containerd[1594]: time="2025-09-13T00:23:39.397953983Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Sep 13 00:23:39.398415 containerd[1594]: time="2025-09-13T00:23:39.398260718Z" level=info msg="Start recovering state" Sep 13 00:23:39.398548 containerd[1594]: time="2025-09-13T00:23:39.398261891Z" level=info msg=serving... address=/run/containerd/containerd.sock Sep 13 00:23:39.398597 containerd[1594]: time="2025-09-13T00:23:39.398551003Z" level=info msg="Start event monitor" Sep 13 00:23:39.398597 containerd[1594]: time="2025-09-13T00:23:39.398570389Z" level=info msg="Start cni network conf syncer for default" Sep 13 00:23:39.398742 containerd[1594]: time="2025-09-13T00:23:39.398719318Z" level=info msg="Start streaming server" Sep 13 00:23:39.398821 containerd[1594]: time="2025-09-13T00:23:39.398777347Z" level=info msg="Registered namespace \"k8s.io\" with NRI" Sep 13 00:23:39.398821 containerd[1594]: time="2025-09-13T00:23:39.398808015Z" level=info msg="runtime interface starting up..." Sep 13 00:23:39.398821 containerd[1594]: time="2025-09-13T00:23:39.398814597Z" level=info msg="starting plugins..." Sep 13 00:23:39.398901 containerd[1594]: time="2025-09-13T00:23:39.398834204Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Sep 13 00:23:39.399131 containerd[1594]: time="2025-09-13T00:23:39.399085084Z" level=info msg="containerd successfully booted in 0.237153s" Sep 13 00:23:39.399400 systemd[1]: Started containerd.service - containerd container runtime. Sep 13 00:23:39.551014 systemd-networkd[1503]: eth0: Gained IPv6LL Sep 13 00:23:39.554484 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Sep 13 00:23:39.556291 systemd[1]: Reached target network-online.target - Network is Online. 
Sep 13 00:23:39.559091 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Sep 13 00:23:39.561608 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 13 00:23:39.566009 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Sep 13 00:23:39.598470 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Sep 13 00:23:39.608489 systemd[1]: coreos-metadata.service: Deactivated successfully. Sep 13 00:23:39.608806 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Sep 13 00:23:39.610469 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Sep 13 00:23:39.616154 tar[1591]: linux-amd64/README.md Sep 13 00:23:39.637680 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Sep 13 00:23:40.303081 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 13 00:23:40.304704 systemd[1]: Reached target multi-user.target - Multi-User System. Sep 13 00:23:40.306141 systemd[1]: Startup finished in 3.045s (kernel) + 6.390s (initrd) + 3.925s (userspace) = 13.362s. Sep 13 00:23:40.339289 (kubelet)[1699]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 13 00:23:40.747732 kubelet[1699]: E0913 00:23:40.747589 1699 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 13 00:23:40.751483 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 13 00:23:40.751725 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 13 00:23:40.752222 systemd[1]: kubelet.service: Consumed 978ms CPU time, 265.1M memory peak. 
Sep 13 00:23:43.704144 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Sep 13 00:23:43.705968 systemd[1]: Started sshd@0-10.0.0.78:22-10.0.0.1:39586.service - OpenSSH per-connection server daemon (10.0.0.1:39586). Sep 13 00:23:43.786912 sshd[1713]: Accepted publickey for core from 10.0.0.1 port 39586 ssh2: RSA SHA256:/8GM80NBEXcYojGDDtGcQvVXMgw0gDChkDh/EWkms34 Sep 13 00:23:43.789320 sshd-session[1713]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 00:23:43.797136 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Sep 13 00:23:43.798528 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Sep 13 00:23:43.805891 systemd-logind[1582]: New session 1 of user core. Sep 13 00:23:43.824234 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Sep 13 00:23:43.827857 systemd[1]: Starting user@500.service - User Manager for UID 500... Sep 13 00:23:43.842576 (systemd)[1717]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Sep 13 00:23:43.845177 systemd-logind[1582]: New session c1 of user core. Sep 13 00:23:44.023565 systemd[1717]: Queued start job for default target default.target. Sep 13 00:23:44.041394 systemd[1717]: Created slice app.slice - User Application Slice. Sep 13 00:23:44.041425 systemd[1717]: Reached target paths.target - Paths. Sep 13 00:23:44.041481 systemd[1717]: Reached target timers.target - Timers. Sep 13 00:23:44.043299 systemd[1717]: Starting dbus.socket - D-Bus User Message Bus Socket... Sep 13 00:23:44.056500 systemd[1717]: Listening on dbus.socket - D-Bus User Message Bus Socket. Sep 13 00:23:44.056686 systemd[1717]: Reached target sockets.target - Sockets. Sep 13 00:23:44.056748 systemd[1717]: Reached target basic.target - Basic System. Sep 13 00:23:44.056823 systemd[1717]: Reached target default.target - Main User Target. 
Sep 13 00:23:44.056879 systemd[1717]: Startup finished in 203ms. Sep 13 00:23:44.057228 systemd[1]: Started user@500.service - User Manager for UID 500. Sep 13 00:23:44.059346 systemd[1]: Started session-1.scope - Session 1 of User core. Sep 13 00:23:44.129896 systemd[1]: Started sshd@1-10.0.0.78:22-10.0.0.1:39588.service - OpenSSH per-connection server daemon (10.0.0.1:39588). Sep 13 00:23:44.189824 sshd[1729]: Accepted publickey for core from 10.0.0.1 port 39588 ssh2: RSA SHA256:/8GM80NBEXcYojGDDtGcQvVXMgw0gDChkDh/EWkms34 Sep 13 00:23:44.191847 sshd-session[1729]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 00:23:44.196812 systemd-logind[1582]: New session 2 of user core. Sep 13 00:23:44.210930 systemd[1]: Started session-2.scope - Session 2 of User core. Sep 13 00:23:44.265641 sshd[1731]: Connection closed by 10.0.0.1 port 39588 Sep 13 00:23:44.265912 sshd-session[1729]: pam_unix(sshd:session): session closed for user core Sep 13 00:23:44.274725 systemd[1]: sshd@1-10.0.0.78:22-10.0.0.1:39588.service: Deactivated successfully. Sep 13 00:23:44.276995 systemd[1]: session-2.scope: Deactivated successfully. Sep 13 00:23:44.278017 systemd-logind[1582]: Session 2 logged out. Waiting for processes to exit. Sep 13 00:23:44.281220 systemd[1]: Started sshd@2-10.0.0.78:22-10.0.0.1:39590.service - OpenSSH per-connection server daemon (10.0.0.1:39590). Sep 13 00:23:44.282005 systemd-logind[1582]: Removed session 2. Sep 13 00:23:44.333494 sshd[1737]: Accepted publickey for core from 10.0.0.1 port 39590 ssh2: RSA SHA256:/8GM80NBEXcYojGDDtGcQvVXMgw0gDChkDh/EWkms34 Sep 13 00:23:44.335505 sshd-session[1737]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 00:23:44.340708 systemd-logind[1582]: New session 3 of user core. Sep 13 00:23:44.354097 systemd[1]: Started session-3.scope - Session 3 of User core. 
Sep 13 00:23:44.405102 sshd[1739]: Connection closed by 10.0.0.1 port 39590 Sep 13 00:23:44.405415 sshd-session[1737]: pam_unix(sshd:session): session closed for user core Sep 13 00:23:44.417307 systemd[1]: sshd@2-10.0.0.78:22-10.0.0.1:39590.service: Deactivated successfully. Sep 13 00:23:44.419056 systemd[1]: session-3.scope: Deactivated successfully. Sep 13 00:23:44.419754 systemd-logind[1582]: Session 3 logged out. Waiting for processes to exit. Sep 13 00:23:44.422344 systemd[1]: Started sshd@3-10.0.0.78:22-10.0.0.1:39594.service - OpenSSH per-connection server daemon (10.0.0.1:39594). Sep 13 00:23:44.423150 systemd-logind[1582]: Removed session 3. Sep 13 00:23:44.473635 sshd[1745]: Accepted publickey for core from 10.0.0.1 port 39594 ssh2: RSA SHA256:/8GM80NBEXcYojGDDtGcQvVXMgw0gDChkDh/EWkms34 Sep 13 00:23:44.475033 sshd-session[1745]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 00:23:44.479366 systemd-logind[1582]: New session 4 of user core. Sep 13 00:23:44.488920 systemd[1]: Started session-4.scope - Session 4 of User core. Sep 13 00:23:44.542947 sshd[1747]: Connection closed by 10.0.0.1 port 39594 Sep 13 00:23:44.543263 sshd-session[1745]: pam_unix(sshd:session): session closed for user core Sep 13 00:23:44.555391 systemd[1]: sshd@3-10.0.0.78:22-10.0.0.1:39594.service: Deactivated successfully. Sep 13 00:23:44.557415 systemd[1]: session-4.scope: Deactivated successfully. Sep 13 00:23:44.558198 systemd-logind[1582]: Session 4 logged out. Waiting for processes to exit. Sep 13 00:23:44.561677 systemd[1]: Started sshd@4-10.0.0.78:22-10.0.0.1:39604.service - OpenSSH per-connection server daemon (10.0.0.1:39604). Sep 13 00:23:44.562252 systemd-logind[1582]: Removed session 4. 
Sep 13 00:23:44.609639 sshd[1753]: Accepted publickey for core from 10.0.0.1 port 39604 ssh2: RSA SHA256:/8GM80NBEXcYojGDDtGcQvVXMgw0gDChkDh/EWkms34 Sep 13 00:23:44.611206 sshd-session[1753]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 00:23:44.615698 systemd-logind[1582]: New session 5 of user core. Sep 13 00:23:44.625918 systemd[1]: Started session-5.scope - Session 5 of User core. Sep 13 00:23:44.686218 sudo[1756]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Sep 13 00:23:44.686575 sudo[1756]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 13 00:23:44.704693 sudo[1756]: pam_unix(sudo:session): session closed for user root Sep 13 00:23:44.706757 sshd[1755]: Connection closed by 10.0.0.1 port 39604 Sep 13 00:23:44.707170 sshd-session[1753]: pam_unix(sshd:session): session closed for user core Sep 13 00:23:44.715493 systemd[1]: sshd@4-10.0.0.78:22-10.0.0.1:39604.service: Deactivated successfully. Sep 13 00:23:44.717316 systemd[1]: session-5.scope: Deactivated successfully. Sep 13 00:23:44.718182 systemd-logind[1582]: Session 5 logged out. Waiting for processes to exit. Sep 13 00:23:44.721183 systemd[1]: Started sshd@5-10.0.0.78:22-10.0.0.1:39620.service - OpenSSH per-connection server daemon (10.0.0.1:39620). Sep 13 00:23:44.721847 systemd-logind[1582]: Removed session 5. Sep 13 00:23:44.772828 sshd[1762]: Accepted publickey for core from 10.0.0.1 port 39620 ssh2: RSA SHA256:/8GM80NBEXcYojGDDtGcQvVXMgw0gDChkDh/EWkms34 Sep 13 00:23:44.774531 sshd-session[1762]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 00:23:44.779647 systemd-logind[1582]: New session 6 of user core. Sep 13 00:23:44.788979 systemd[1]: Started session-6.scope - Session 6 of User core. 
Sep 13 00:23:44.844118 sudo[1766]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Sep 13 00:23:44.844429 sudo[1766]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 13 00:23:45.455353 sudo[1766]: pam_unix(sudo:session): session closed for user root Sep 13 00:23:45.462692 sudo[1765]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Sep 13 00:23:45.463059 sudo[1765]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 13 00:23:45.475260 systemd[1]: Starting audit-rules.service - Load Audit Rules... Sep 13 00:23:45.523256 augenrules[1788]: No rules Sep 13 00:23:45.525259 systemd[1]: audit-rules.service: Deactivated successfully. Sep 13 00:23:45.525587 systemd[1]: Finished audit-rules.service - Load Audit Rules. Sep 13 00:23:45.527066 sudo[1765]: pam_unix(sudo:session): session closed for user root Sep 13 00:23:45.529055 sshd[1764]: Connection closed by 10.0.0.1 port 39620 Sep 13 00:23:45.529361 sshd-session[1762]: pam_unix(sshd:session): session closed for user core Sep 13 00:23:45.542557 systemd[1]: sshd@5-10.0.0.78:22-10.0.0.1:39620.service: Deactivated successfully. Sep 13 00:23:45.544876 systemd[1]: session-6.scope: Deactivated successfully. Sep 13 00:23:45.545676 systemd-logind[1582]: Session 6 logged out. Waiting for processes to exit. Sep 13 00:23:45.549009 systemd[1]: Started sshd@6-10.0.0.78:22-10.0.0.1:39624.service - OpenSSH per-connection server daemon (10.0.0.1:39624). Sep 13 00:23:45.549680 systemd-logind[1582]: Removed session 6. Sep 13 00:23:45.604887 sshd[1797]: Accepted publickey for core from 10.0.0.1 port 39624 ssh2: RSA SHA256:/8GM80NBEXcYojGDDtGcQvVXMgw0gDChkDh/EWkms34 Sep 13 00:23:45.606624 sshd-session[1797]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 00:23:45.611524 systemd-logind[1582]: New session 7 of user core. 
Sep 13 00:23:45.622045 systemd[1]: Started session-7.scope - Session 7 of User core. Sep 13 00:23:45.677286 sudo[1802]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Sep 13 00:23:45.677625 sudo[1802]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 13 00:23:47.095986 kernel: clocksource: Long readout interval, skipping watchdog check: cs_nsec: 1111694160 wd_nsec: 1111693762 Sep 13 00:23:47.619581 systemd[1]: Starting docker.service - Docker Application Container Engine... Sep 13 00:23:47.637310 (dockerd)[1822]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Sep 13 00:23:48.299339 dockerd[1822]: time="2025-09-13T00:23:48.299224593Z" level=info msg="Starting up" Sep 13 00:23:48.361726 dockerd[1822]: time="2025-09-13T00:23:48.361634193Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" Sep 13 00:23:48.463935 dockerd[1822]: time="2025-09-13T00:23:48.463869421Z" level=info msg="Loading containers: start." Sep 13 00:23:48.477820 kernel: Initializing XFRM netlink socket Sep 13 00:23:48.756140 systemd-networkd[1503]: docker0: Link UP Sep 13 00:23:48.762596 dockerd[1822]: time="2025-09-13T00:23:48.762524506Z" level=info msg="Loading containers: done." Sep 13 00:23:48.777588 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck2075520381-merged.mount: Deactivated successfully. 
Sep 13 00:23:48.780593 dockerd[1822]: time="2025-09-13T00:23:48.780527618Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Sep 13 00:23:48.780663 dockerd[1822]: time="2025-09-13T00:23:48.780646331Z" level=info msg="Docker daemon" commit=bbd0a17ccc67e48d4a69393287b7fcc4f0578683 containerd-snapshotter=false storage-driver=overlay2 version=28.0.1 Sep 13 00:23:48.780857 dockerd[1822]: time="2025-09-13T00:23:48.780836457Z" level=info msg="Initializing buildkit" Sep 13 00:23:48.812837 dockerd[1822]: time="2025-09-13T00:23:48.812744927Z" level=info msg="Completed buildkit initialization" Sep 13 00:23:48.819367 dockerd[1822]: time="2025-09-13T00:23:48.819274402Z" level=info msg="Daemon has completed initialization" Sep 13 00:23:48.819534 dockerd[1822]: time="2025-09-13T00:23:48.819452065Z" level=info msg="API listen on /run/docker.sock" Sep 13 00:23:48.819669 systemd[1]: Started docker.service - Docker Application Container Engine. Sep 13 00:23:49.885123 containerd[1594]: time="2025-09-13T00:23:49.885052096Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.9\"" Sep 13 00:23:50.968043 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Sep 13 00:23:50.969969 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 13 00:23:50.981934 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3266939904.mount: Deactivated successfully. Sep 13 00:23:51.235289 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Sep 13 00:23:51.262311 (kubelet)[2048]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 13 00:23:51.497849 kubelet[2048]: E0913 00:23:51.497697 2048 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 13 00:23:51.504932 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 13 00:23:51.505221 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 13 00:23:51.505752 systemd[1]: kubelet.service: Consumed 465ms CPU time, 111.6M memory peak. Sep 13 00:23:53.027569 containerd[1594]: time="2025-09-13T00:23:53.027494697Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.32.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:23:53.028384 containerd[1594]: time="2025-09-13T00:23:53.028351183Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.32.9: active requests=0, bytes read=28837916" Sep 13 00:23:53.029731 containerd[1594]: time="2025-09-13T00:23:53.029692859Z" level=info msg="ImageCreate event name:\"sha256:abd2b525baf428ffb8b8b7d1e09761dc5cdb7ed0c7896a9427e29e84f8eafc59\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:23:53.032197 containerd[1594]: time="2025-09-13T00:23:53.032148635Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:6df11cc2ad9679b1117be34d3a0230add88bc0a08fd7a3ebc26b680575e8de97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:23:53.032980 containerd[1594]: time="2025-09-13T00:23:53.032943546Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.32.9\" with image id 
\"sha256:abd2b525baf428ffb8b8b7d1e09761dc5cdb7ed0c7896a9427e29e84f8eafc59\", repo tag \"registry.k8s.io/kube-apiserver:v1.32.9\", repo digest \"registry.k8s.io/kube-apiserver@sha256:6df11cc2ad9679b1117be34d3a0230add88bc0a08fd7a3ebc26b680575e8de97\", size \"28834515\" in 3.147842909s" Sep 13 00:23:53.033021 containerd[1594]: time="2025-09-13T00:23:53.032980896Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.9\" returns image reference \"sha256:abd2b525baf428ffb8b8b7d1e09761dc5cdb7ed0c7896a9427e29e84f8eafc59\"" Sep 13 00:23:53.033703 containerd[1594]: time="2025-09-13T00:23:53.033636305Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.9\"" Sep 13 00:23:54.345707 containerd[1594]: time="2025-09-13T00:23:54.345595216Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.32.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:23:54.346532 containerd[1594]: time="2025-09-13T00:23:54.346512587Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.32.9: active requests=0, bytes read=24787027" Sep 13 00:23:54.348070 containerd[1594]: time="2025-09-13T00:23:54.348036013Z" level=info msg="ImageCreate event name:\"sha256:0debe32fbb7223500fcf8c312f2a568a5abd3ed9274d8ec6780cfb30b8861e91\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:23:54.350637 containerd[1594]: time="2025-09-13T00:23:54.350592508Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:243c4b8e3bce271fcb1b78008ab996ab6976b1a20096deac08338fcd17979922\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:23:54.351582 containerd[1594]: time="2025-09-13T00:23:54.351535727Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.32.9\" with image id \"sha256:0debe32fbb7223500fcf8c312f2a568a5abd3ed9274d8ec6780cfb30b8861e91\", repo tag \"registry.k8s.io/kube-controller-manager:v1.32.9\", repo digest 
\"registry.k8s.io/kube-controller-manager@sha256:243c4b8e3bce271fcb1b78008ab996ab6976b1a20096deac08338fcd17979922\", size \"26421706\" in 1.317874054s" Sep 13 00:23:54.351637 containerd[1594]: time="2025-09-13T00:23:54.351585149Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.9\" returns image reference \"sha256:0debe32fbb7223500fcf8c312f2a568a5abd3ed9274d8ec6780cfb30b8861e91\"" Sep 13 00:23:54.352285 containerd[1594]: time="2025-09-13T00:23:54.352261868Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.9\"" Sep 13 00:23:55.820652 containerd[1594]: time="2025-09-13T00:23:55.820578226Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.32.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:23:55.821815 containerd[1594]: time="2025-09-13T00:23:55.821753219Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.32.9: active requests=0, bytes read=19176289" Sep 13 00:23:55.823453 containerd[1594]: time="2025-09-13T00:23:55.823402162Z" level=info msg="ImageCreate event name:\"sha256:6934c23b154fcb9bf54ed5913782de746735a49f4daa4732285915050cd44ad5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:23:55.826555 containerd[1594]: time="2025-09-13T00:23:55.826509870Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:50c49520dbd0e8b4076b6a5c77d8014df09ea3d59a73e8bafd2678d51ebb92d5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:23:55.827629 containerd[1594]: time="2025-09-13T00:23:55.827588803Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.32.9\" with image id \"sha256:6934c23b154fcb9bf54ed5913782de746735a49f4daa4732285915050cd44ad5\", repo tag \"registry.k8s.io/kube-scheduler:v1.32.9\", repo digest \"registry.k8s.io/kube-scheduler@sha256:50c49520dbd0e8b4076b6a5c77d8014df09ea3d59a73e8bafd2678d51ebb92d5\", size \"20810986\" in 1.47529815s" Sep 13 00:23:55.827629 
containerd[1594]: time="2025-09-13T00:23:55.827625802Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.9\" returns image reference \"sha256:6934c23b154fcb9bf54ed5913782de746735a49f4daa4732285915050cd44ad5\"" Sep 13 00:23:55.828276 containerd[1594]: time="2025-09-13T00:23:55.828249472Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.9\"" Sep 13 00:23:56.942343 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1073340560.mount: Deactivated successfully. Sep 13 00:23:57.232797 containerd[1594]: time="2025-09-13T00:23:57.232634816Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.32.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:23:57.233686 containerd[1594]: time="2025-09-13T00:23:57.233659007Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.32.9: active requests=0, bytes read=30924206" Sep 13 00:23:57.235088 containerd[1594]: time="2025-09-13T00:23:57.235031941Z" level=info msg="ImageCreate event name:\"sha256:fa3fdca615a501743d8deb39729a96e731312aac8d96accec061d5265360332f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:23:57.236871 containerd[1594]: time="2025-09-13T00:23:57.236837007Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:886af02535dc34886e4618b902f8c140d89af57233a245621d29642224516064\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:23:57.237391 containerd[1594]: time="2025-09-13T00:23:57.237366009Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.32.9\" with image id \"sha256:fa3fdca615a501743d8deb39729a96e731312aac8d96accec061d5265360332f\", repo tag \"registry.k8s.io/kube-proxy:v1.32.9\", repo digest \"registry.k8s.io/kube-proxy@sha256:886af02535dc34886e4618b902f8c140d89af57233a245621d29642224516064\", size \"30923225\" in 1.409089787s" Sep 13 00:23:57.237434 containerd[1594]: time="2025-09-13T00:23:57.237395524Z" level=info msg="PullImage 
\"registry.k8s.io/kube-proxy:v1.32.9\" returns image reference \"sha256:fa3fdca615a501743d8deb39729a96e731312aac8d96accec061d5265360332f\"" Sep 13 00:23:57.238253 containerd[1594]: time="2025-09-13T00:23:57.238218928Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" Sep 13 00:23:57.924911 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1825689368.mount: Deactivated successfully. Sep 13 00:23:58.814211 containerd[1594]: time="2025-09-13T00:23:58.814086332Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:23:58.815179 containerd[1594]: time="2025-09-13T00:23:58.815109040Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=18565241" Sep 13 00:23:58.816213 containerd[1594]: time="2025-09-13T00:23:58.816169569Z" level=info msg="ImageCreate event name:\"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:23:58.819071 containerd[1594]: time="2025-09-13T00:23:58.819017029Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:23:58.820051 containerd[1594]: time="2025-09-13T00:23:58.820003579Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"18562039\" in 1.581758702s" Sep 13 00:23:58.820051 containerd[1594]: time="2025-09-13T00:23:58.820047071Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference 
\"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\"" Sep 13 00:23:58.820966 containerd[1594]: time="2025-09-13T00:23:58.820941879Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Sep 13 00:23:59.667158 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount100220288.mount: Deactivated successfully. Sep 13 00:23:59.673148 containerd[1594]: time="2025-09-13T00:23:59.673074582Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 13 00:23:59.673853 containerd[1594]: time="2025-09-13T00:23:59.673757833Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138" Sep 13 00:23:59.675202 containerd[1594]: time="2025-09-13T00:23:59.675143822Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 13 00:23:59.677599 containerd[1594]: time="2025-09-13T00:23:59.677541779Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 13 00:23:59.678368 containerd[1594]: time="2025-09-13T00:23:59.678300422Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 857.328698ms" Sep 13 00:23:59.678368 containerd[1594]: time="2025-09-13T00:23:59.678345627Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns 
image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Sep 13 00:23:59.679035 containerd[1594]: time="2025-09-13T00:23:59.678982391Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\"" Sep 13 00:24:00.553460 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3585628474.mount: Deactivated successfully. Sep 13 00:24:01.755549 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Sep 13 00:24:01.757966 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 13 00:24:02.566300 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 13 00:24:02.571499 (kubelet)[2240]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 13 00:24:02.610502 kubelet[2240]: E0913 00:24:02.610437 2240 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 13 00:24:02.614862 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 13 00:24:02.615088 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 13 00:24:02.615467 systemd[1]: kubelet.service: Consumed 235ms CPU time, 110.6M memory peak. 
Sep 13 00:24:02.937497 containerd[1594]: time="2025-09-13T00:24:02.937357858Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.16-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:24:02.938668 containerd[1594]: time="2025-09-13T00:24:02.938630625Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.16-0: active requests=0, bytes read=57682056" Sep 13 00:24:02.940934 containerd[1594]: time="2025-09-13T00:24:02.940888960Z" level=info msg="ImageCreate event name:\"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:24:02.944079 containerd[1594]: time="2025-09-13T00:24:02.944027977Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:24:02.945289 containerd[1594]: time="2025-09-13T00:24:02.945252413Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.16-0\" with image id \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\", repo tag \"registry.k8s.io/etcd:3.5.16-0\", repo digest \"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\", size \"57680541\" in 3.266223074s" Sep 13 00:24:02.945344 containerd[1594]: time="2025-09-13T00:24:02.945289523Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\" returns image reference \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\"" Sep 13 00:24:04.952864 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Sep 13 00:24:04.953108 systemd[1]: kubelet.service: Consumed 235ms CPU time, 110.6M memory peak. Sep 13 00:24:04.955659 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 13 00:24:04.981492 systemd[1]: Reload requested from client PID 2281 ('systemctl') (unit session-7.scope)... 
Sep 13 00:24:04.981524 systemd[1]: Reloading... Sep 13 00:24:05.081834 zram_generator::config[2329]: No configuration found. Sep 13 00:24:05.529613 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 13 00:24:05.650752 systemd[1]: Reloading finished in 668 ms. Sep 13 00:24:05.721744 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Sep 13 00:24:05.721923 systemd[1]: kubelet.service: Failed with result 'signal'. Sep 13 00:24:05.722364 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Sep 13 00:24:05.722437 systemd[1]: kubelet.service: Consumed 172ms CPU time, 98.3M memory peak. Sep 13 00:24:05.724868 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 13 00:24:05.963895 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 13 00:24:05.974222 (kubelet)[2371]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Sep 13 00:24:06.017780 kubelet[2371]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 13 00:24:06.017780 kubelet[2371]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Sep 13 00:24:06.017780 kubelet[2371]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Sep 13 00:24:06.018257 kubelet[2371]: I0913 00:24:06.017872 2371 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Sep 13 00:24:06.218903 kubelet[2371]: I0913 00:24:06.218738 2371 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Sep 13 00:24:06.218903 kubelet[2371]: I0913 00:24:06.218800 2371 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Sep 13 00:24:06.219160 kubelet[2371]: I0913 00:24:06.219117 2371 server.go:954] "Client rotation is on, will bootstrap in background" Sep 13 00:24:06.239899 kubelet[2371]: E0913 00:24:06.239855 2371 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.78:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.78:6443: connect: connection refused" logger="UnhandledError" Sep 13 00:24:06.240848 kubelet[2371]: I0913 00:24:06.240803 2371 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Sep 13 00:24:06.248526 kubelet[2371]: I0913 00:24:06.248490 2371 server.go:1444] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Sep 13 00:24:06.255408 kubelet[2371]: I0913 00:24:06.255356 2371 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Sep 13 00:24:06.256619 kubelet[2371]: I0913 00:24:06.256569 2371 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Sep 13 00:24:06.256800 kubelet[2371]: I0913 00:24:06.256603 2371 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Sep 13 00:24:06.256800 kubelet[2371]: I0913 00:24:06.256779 2371 topology_manager.go:138] "Creating topology manager with none policy" 
Sep 13 00:24:06.257158 kubelet[2371]: I0913 00:24:06.256804 2371 container_manager_linux.go:304] "Creating device plugin manager" Sep 13 00:24:06.257158 kubelet[2371]: I0913 00:24:06.256995 2371 state_mem.go:36] "Initialized new in-memory state store" Sep 13 00:24:06.259688 kubelet[2371]: I0913 00:24:06.259645 2371 kubelet.go:446] "Attempting to sync node with API server" Sep 13 00:24:06.259688 kubelet[2371]: I0913 00:24:06.259682 2371 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Sep 13 00:24:06.259756 kubelet[2371]: I0913 00:24:06.259714 2371 kubelet.go:352] "Adding apiserver pod source" Sep 13 00:24:06.259756 kubelet[2371]: I0913 00:24:06.259725 2371 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Sep 13 00:24:06.262653 kubelet[2371]: I0913 00:24:06.262622 2371 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v2.0.4" apiVersion="v1" Sep 13 00:24:06.263541 kubelet[2371]: I0913 00:24:06.262990 2371 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Sep 13 00:24:06.264187 kubelet[2371]: W0913 00:24:06.264158 2371 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
Sep 13 00:24:06.265909 kubelet[2371]: I0913 00:24:06.265881 2371 watchdog_linux.go:99] "Systemd watchdog is not enabled" Sep 13 00:24:06.265909 kubelet[2371]: I0913 00:24:06.265910 2371 server.go:1287] "Started kubelet" Sep 13 00:24:06.266632 kubelet[2371]: W0913 00:24:06.266558 2371 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.78:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.78:6443: connect: connection refused Sep 13 00:24:06.266632 kubelet[2371]: E0913 00:24:06.266628 2371 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.78:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.78:6443: connect: connection refused" logger="UnhandledError" Sep 13 00:24:06.266776 kubelet[2371]: W0913 00:24:06.266669 2371 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.78:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.78:6443: connect: connection refused Sep 13 00:24:06.266776 kubelet[2371]: E0913 00:24:06.266731 2371 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.78:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.78:6443: connect: connection refused" logger="UnhandledError" Sep 13 00:24:06.266776 kubelet[2371]: I0913 00:24:06.266729 2371 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Sep 13 00:24:06.267925 kubelet[2371]: I0913 00:24:06.267903 2371 server.go:479] "Adding debug handlers to kubelet server" Sep 13 00:24:06.270203 kubelet[2371]: I0913 00:24:06.269420 2371 ratelimit.go:55] "Setting rate 
limiting for endpoint" service="podresources" qps=100 burstTokens=10 Sep 13 00:24:06.270203 kubelet[2371]: I0913 00:24:06.269996 2371 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Sep 13 00:24:06.270281 kubelet[2371]: I0913 00:24:06.270259 2371 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Sep 13 00:24:06.270908 kubelet[2371]: I0913 00:24:06.270884 2371 volume_manager.go:297] "Starting Kubelet Volume Manager" Sep 13 00:24:06.271008 kubelet[2371]: I0913 00:24:06.270983 2371 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Sep 13 00:24:06.272341 kubelet[2371]: E0913 00:24:06.272175 2371 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 13 00:24:06.272391 kubelet[2371]: E0913 00:24:06.272361 2371 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.78:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.78:6443: connect: connection refused" interval="200ms" Sep 13 00:24:06.272495 kubelet[2371]: E0913 00:24:06.271224 2371 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.78:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.78:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.1864afcc3a323b91 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-09-13 00:24:06.265895825 +0000 UTC m=+0.287397214,LastTimestamp:2025-09-13 00:24:06.265895825 +0000 UTC m=+0.287397214,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 
UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Sep 13 00:24:06.273100 kubelet[2371]: W0913 00:24:06.273053 2371 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.78:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.78:6443: connect: connection refused Sep 13 00:24:06.273150 kubelet[2371]: E0913 00:24:06.273112 2371 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.78:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.78:6443: connect: connection refused" logger="UnhandledError" Sep 13 00:24:06.273436 kubelet[2371]: I0913 00:24:06.273184 2371 reconciler.go:26] "Reconciler: start to sync state" Sep 13 00:24:06.273436 kubelet[2371]: I0913 00:24:06.273401 2371 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Sep 13 00:24:06.273580 kubelet[2371]: E0913 00:24:06.273548 2371 kubelet.go:1555] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Sep 13 00:24:06.274092 kubelet[2371]: I0913 00:24:06.274073 2371 factory.go:221] Registration of the containerd container factory successfully Sep 13 00:24:06.274158 kubelet[2371]: I0913 00:24:06.274148 2371 factory.go:221] Registration of the systemd container factory successfully Sep 13 00:24:06.274296 kubelet[2371]: I0913 00:24:06.274276 2371 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Sep 13 00:24:06.291550 kubelet[2371]: I0913 00:24:06.291490 2371 cpu_manager.go:221] "Starting CPU manager" policy="none" Sep 13 00:24:06.291689 kubelet[2371]: I0913 00:24:06.291677 2371 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Sep 13 00:24:06.291748 kubelet[2371]: I0913 00:24:06.291738 2371 state_mem.go:36] "Initialized new in-memory state store" Sep 13 00:24:06.293100 kubelet[2371]: I0913 00:24:06.292973 2371 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Sep 13 00:24:06.294869 kubelet[2371]: I0913 00:24:06.294704 2371 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Sep 13 00:24:06.294869 kubelet[2371]: I0913 00:24:06.294734 2371 status_manager.go:227] "Starting to sync pod status with apiserver" Sep 13 00:24:06.294869 kubelet[2371]: I0913 00:24:06.294753 2371 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Sep 13 00:24:06.294869 kubelet[2371]: I0913 00:24:06.294759 2371 kubelet.go:2382] "Starting kubelet main sync loop" Sep 13 00:24:06.295083 kubelet[2371]: E0913 00:24:06.295059 2371 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Sep 13 00:24:06.373335 kubelet[2371]: E0913 00:24:06.373275 2371 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 13 00:24:06.395624 kubelet[2371]: E0913 00:24:06.395574 2371 kubelet.go:2406] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Sep 13 00:24:06.473483 kubelet[2371]: E0913 00:24:06.473373 2371 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 13 00:24:06.473483 kubelet[2371]: E0913 00:24:06.473397 2371 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.78:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.78:6443: connect: connection refused" interval="400ms" Sep 13 00:24:06.573892 kubelet[2371]: E0913 00:24:06.573841 2371 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 13 00:24:06.596110 kubelet[2371]: E0913 00:24:06.596050 2371 kubelet.go:2406] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Sep 13 00:24:06.674417 kubelet[2371]: E0913 00:24:06.674348 2371 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 13 00:24:06.775640 kubelet[2371]: E0913 00:24:06.775477 2371 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 13 00:24:06.833222 kubelet[2371]: I0913 00:24:06.833169 2371 policy_none.go:49] "None policy: Start" Sep 13 
00:24:06.833222 kubelet[2371]: I0913 00:24:06.833206 2371 memory_manager.go:186] "Starting memorymanager" policy="None" Sep 13 00:24:06.833222 kubelet[2371]: I0913 00:24:06.833219 2371 state_mem.go:35] "Initializing new in-memory state store" Sep 13 00:24:06.834097 kubelet[2371]: W0913 00:24:06.834008 2371 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.78:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.78:6443: connect: connection refused Sep 13 00:24:06.834097 kubelet[2371]: E0913 00:24:06.834076 2371 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.78:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.78:6443: connect: connection refused" logger="UnhandledError" Sep 13 00:24:06.841623 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Sep 13 00:24:06.860906 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Sep 13 00:24:06.875037 kubelet[2371]: E0913 00:24:06.874953 2371 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.78:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.78:6443: connect: connection refused" interval="800ms" Sep 13 00:24:06.875864 kubelet[2371]: E0913 00:24:06.875832 2371 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 13 00:24:06.879018 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. 
Sep 13 00:24:06.880467 kubelet[2371]: I0913 00:24:06.880429 2371 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Sep 13 00:24:06.880761 kubelet[2371]: I0913 00:24:06.880744 2371 eviction_manager.go:189] "Eviction manager: starting control loop" Sep 13 00:24:06.880826 kubelet[2371]: I0913 00:24:06.880760 2371 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Sep 13 00:24:06.881105 kubelet[2371]: I0913 00:24:06.881078 2371 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Sep 13 00:24:06.882038 kubelet[2371]: E0913 00:24:06.882001 2371 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Sep 13 00:24:06.882185 kubelet[2371]: E0913 00:24:06.882053 2371 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Sep 13 00:24:06.983107 kubelet[2371]: I0913 00:24:06.982733 2371 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Sep 13 00:24:06.983318 kubelet[2371]: E0913 00:24:06.983261 2371 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.78:6443/api/v1/nodes\": dial tcp 10.0.0.78:6443: connect: connection refused" node="localhost" Sep 13 00:24:07.007087 systemd[1]: Created slice kubepods-burstable-pod1403266a9792debaa127cd8df7a81c3c.slice - libcontainer container kubepods-burstable-pod1403266a9792debaa127cd8df7a81c3c.slice. 
Sep 13 00:24:07.029408 kubelet[2371]: E0913 00:24:07.029261 2371 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 13 00:24:07.032526 systemd[1]: Created slice kubepods-burstable-pod44743d6c951adf6a061fa87711344e87.slice - libcontainer container kubepods-burstable-pod44743d6c951adf6a061fa87711344e87.slice. Sep 13 00:24:07.034735 kubelet[2371]: E0913 00:24:07.034703 2371 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 13 00:24:07.037654 systemd[1]: Created slice kubepods-burstable-pod72a30db4fc25e4da65a3b99eba43be94.slice - libcontainer container kubepods-burstable-pod72a30db4fc25e4da65a3b99eba43be94.slice. Sep 13 00:24:07.039399 kubelet[2371]: E0913 00:24:07.039369 2371 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 13 00:24:07.076868 kubelet[2371]: I0913 00:24:07.076817 2371 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/1403266a9792debaa127cd8df7a81c3c-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"1403266a9792debaa127cd8df7a81c3c\") " pod="kube-system/kube-controller-manager-localhost" Sep 13 00:24:07.076939 kubelet[2371]: I0913 00:24:07.076864 2371 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/1403266a9792debaa127cd8df7a81c3c-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"1403266a9792debaa127cd8df7a81c3c\") " pod="kube-system/kube-controller-manager-localhost" Sep 13 00:24:07.076939 kubelet[2371]: I0913 00:24:07.076890 2371 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/1403266a9792debaa127cd8df7a81c3c-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"1403266a9792debaa127cd8df7a81c3c\") " pod="kube-system/kube-controller-manager-localhost" Sep 13 00:24:07.076939 kubelet[2371]: I0913 00:24:07.076918 2371 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/1403266a9792debaa127cd8df7a81c3c-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"1403266a9792debaa127cd8df7a81c3c\") " pod="kube-system/kube-controller-manager-localhost" Sep 13 00:24:07.076939 kubelet[2371]: I0913 00:24:07.076937 2371 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/72a30db4fc25e4da65a3b99eba43be94-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"72a30db4fc25e4da65a3b99eba43be94\") " pod="kube-system/kube-scheduler-localhost" Sep 13 00:24:07.077036 kubelet[2371]: I0913 00:24:07.076983 2371 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/44743d6c951adf6a061fa87711344e87-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"44743d6c951adf6a061fa87711344e87\") " pod="kube-system/kube-apiserver-localhost" Sep 13 00:24:07.077068 kubelet[2371]: I0913 00:24:07.077044 2371 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/44743d6c951adf6a061fa87711344e87-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"44743d6c951adf6a061fa87711344e87\") " pod="kube-system/kube-apiserver-localhost" Sep 13 00:24:07.077097 kubelet[2371]: I0913 00:24:07.077077 2371 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/1403266a9792debaa127cd8df7a81c3c-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"1403266a9792debaa127cd8df7a81c3c\") " pod="kube-system/kube-controller-manager-localhost" Sep 13 00:24:07.077123 kubelet[2371]: I0913 00:24:07.077098 2371 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/44743d6c951adf6a061fa87711344e87-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"44743d6c951adf6a061fa87711344e87\") " pod="kube-system/kube-apiserver-localhost" Sep 13 00:24:07.184749 kubelet[2371]: I0913 00:24:07.184715 2371 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Sep 13 00:24:07.185119 kubelet[2371]: E0913 00:24:07.185080 2371 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.78:6443/api/v1/nodes\": dial tcp 10.0.0.78:6443: connect: connection refused" node="localhost" Sep 13 00:24:07.208625 kubelet[2371]: W0913 00:24:07.208570 2371 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.78:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.78:6443: connect: connection refused Sep 13 00:24:07.208625 kubelet[2371]: E0913 00:24:07.208619 2371 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.78:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.78:6443: connect: connection refused" logger="UnhandledError" Sep 13 00:24:07.330007 kubelet[2371]: E0913 00:24:07.329873 2371 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:24:07.330738 
containerd[1594]: time="2025-09-13T00:24:07.330688501Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:1403266a9792debaa127cd8df7a81c3c,Namespace:kube-system,Attempt:0,}"
Sep 13 00:24:07.335944 kubelet[2371]: E0913 00:24:07.335913 2371 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 13 00:24:07.336403 containerd[1594]: time="2025-09-13T00:24:07.336338777Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:44743d6c951adf6a061fa87711344e87,Namespace:kube-system,Attempt:0,}"
Sep 13 00:24:07.340630 kubelet[2371]: E0913 00:24:07.340607 2371 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 13 00:24:07.340950 containerd[1594]: time="2025-09-13T00:24:07.340920209Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:72a30db4fc25e4da65a3b99eba43be94,Namespace:kube-system,Attempt:0,}"
Sep 13 00:24:07.363308 containerd[1594]: time="2025-09-13T00:24:07.363212325Z" level=info msg="connecting to shim 813f995ed2ce797393d1a562bd60b9ca3b3213b73718f207b8879d950a629edd" address="unix:///run/containerd/s/7c6f2b58ee3db8efc37a4359d506a14bc1ace3de72a06cb3ba40168156764219" namespace=k8s.io protocol=ttrpc version=3
Sep 13 00:24:07.379972 containerd[1594]: time="2025-09-13T00:24:07.379917152Z" level=info msg="connecting to shim f5b53ae3f01dc1033663c9d37145fa1e76410c2b16b45b77a90e3a093625c98f" address="unix:///run/containerd/s/b5da4712bce583e5199d3d6bd28cfe621d00ee70e822f0f65ed5e2d4d44fcc71" namespace=k8s.io protocol=ttrpc version=3
Sep 13 00:24:07.386823 containerd[1594]: time="2025-09-13T00:24:07.386462968Z" level=info msg="connecting to shim b517ce2ae07cc1baf3c98c41c981421372c6aae85e9a5b9c4698c685c9b31ae8" address="unix:///run/containerd/s/9430c9eb7f5300a0f68e2e596ef19bffdc006595efacd2470302c0db87b626ee" namespace=k8s.io protocol=ttrpc version=3
Sep 13 00:24:07.400021 systemd[1]: Started cri-containerd-813f995ed2ce797393d1a562bd60b9ca3b3213b73718f207b8879d950a629edd.scope - libcontainer container 813f995ed2ce797393d1a562bd60b9ca3b3213b73718f207b8879d950a629edd.
Sep 13 00:24:07.403747 systemd[1]: Started cri-containerd-f5b53ae3f01dc1033663c9d37145fa1e76410c2b16b45b77a90e3a093625c98f.scope - libcontainer container f5b53ae3f01dc1033663c9d37145fa1e76410c2b16b45b77a90e3a093625c98f.
Sep 13 00:24:07.419925 systemd[1]: Started cri-containerd-b517ce2ae07cc1baf3c98c41c981421372c6aae85e9a5b9c4698c685c9b31ae8.scope - libcontainer container b517ce2ae07cc1baf3c98c41c981421372c6aae85e9a5b9c4698c685c9b31ae8.
Sep 13 00:24:07.427344 kubelet[2371]: W0913 00:24:07.427242 2371 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.78:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.78:6443: connect: connection refused
Sep 13 00:24:07.427344 kubelet[2371]: E0913 00:24:07.427304 2371 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.78:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.78:6443: connect: connection refused" logger="UnhandledError"
Sep 13 00:24:07.462431 containerd[1594]: time="2025-09-13T00:24:07.462319296Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:44743d6c951adf6a061fa87711344e87,Namespace:kube-system,Attempt:0,} returns sandbox id \"f5b53ae3f01dc1033663c9d37145fa1e76410c2b16b45b77a90e3a093625c98f\""
Sep 13 00:24:07.463321 containerd[1594]: time="2025-09-13T00:24:07.463285608Z" level=info msg="RunPodSandbox for
&PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:1403266a9792debaa127cd8df7a81c3c,Namespace:kube-system,Attempt:0,} returns sandbox id \"813f995ed2ce797393d1a562bd60b9ca3b3213b73718f207b8879d950a629edd\""
Sep 13 00:24:07.463745 kubelet[2371]: E0913 00:24:07.463719 2371 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 13 00:24:07.463983 kubelet[2371]: E0913 00:24:07.463965 2371 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 13 00:24:07.466549 containerd[1594]: time="2025-09-13T00:24:07.466512369Z" level=info msg="CreateContainer within sandbox \"f5b53ae3f01dc1033663c9d37145fa1e76410c2b16b45b77a90e3a093625c98f\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}"
Sep 13 00:24:07.467422 containerd[1594]: time="2025-09-13T00:24:07.467340893Z" level=info msg="CreateContainer within sandbox \"813f995ed2ce797393d1a562bd60b9ca3b3213b73718f207b8879d950a629edd\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}"
Sep 13 00:24:07.478834 containerd[1594]: time="2025-09-13T00:24:07.478770307Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:72a30db4fc25e4da65a3b99eba43be94,Namespace:kube-system,Attempt:0,} returns sandbox id \"b517ce2ae07cc1baf3c98c41c981421372c6aae85e9a5b9c4698c685c9b31ae8\""
Sep 13 00:24:07.479133 kubelet[2371]: W0913 00:24:07.479075 2371 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.78:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.78:6443: connect: connection refused
Sep 13 00:24:07.479224 kubelet[2371]: E0913 00:24:07.479144 2371 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.78:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.78:6443: connect: connection refused" logger="UnhandledError"
Sep 13 00:24:07.479512 kubelet[2371]: E0913 00:24:07.479485 2371 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 13 00:24:07.481444 containerd[1594]: time="2025-09-13T00:24:07.481227115Z" level=info msg="CreateContainer within sandbox \"b517ce2ae07cc1baf3c98c41c981421372c6aae85e9a5b9c4698c685c9b31ae8\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}"
Sep 13 00:24:07.482645 containerd[1594]: time="2025-09-13T00:24:07.482617983Z" level=info msg="Container ef0919dbc70acbfd568ae22a029e69d378f2d4a64bdb016c66bcc682eaaf517f: CDI devices from CRI Config.CDIDevices: []"
Sep 13 00:24:07.483627 containerd[1594]: time="2025-09-13T00:24:07.483606106Z" level=info msg="Container c4d2d92daec92f224c9a1fe562dc119f212db941ed5e2a2e8dfe621c3a356fc1: CDI devices from CRI Config.CDIDevices: []"
Sep 13 00:24:07.493826 containerd[1594]: time="2025-09-13T00:24:07.493770698Z" level=info msg="CreateContainer within sandbox \"f5b53ae3f01dc1033663c9d37145fa1e76410c2b16b45b77a90e3a093625c98f\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"c4d2d92daec92f224c9a1fe562dc119f212db941ed5e2a2e8dfe621c3a356fc1\""
Sep 13 00:24:07.494317 containerd[1594]: time="2025-09-13T00:24:07.494293238Z" level=info msg="StartContainer for \"c4d2d92daec92f224c9a1fe562dc119f212db941ed5e2a2e8dfe621c3a356fc1\""
Sep 13 00:24:07.495285 containerd[1594]: time="2025-09-13T00:24:07.495259711Z" level=info msg="connecting to shim c4d2d92daec92f224c9a1fe562dc119f212db941ed5e2a2e8dfe621c3a356fc1"
address="unix:///run/containerd/s/b5da4712bce583e5199d3d6bd28cfe621d00ee70e822f0f65ed5e2d4d44fcc71" protocol=ttrpc version=3
Sep 13 00:24:07.495737 containerd[1594]: time="2025-09-13T00:24:07.495711799Z" level=info msg="CreateContainer within sandbox \"813f995ed2ce797393d1a562bd60b9ca3b3213b73718f207b8879d950a629edd\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"ef0919dbc70acbfd568ae22a029e69d378f2d4a64bdb016c66bcc682eaaf517f\""
Sep 13 00:24:07.496535 containerd[1594]: time="2025-09-13T00:24:07.496507982Z" level=info msg="StartContainer for \"ef0919dbc70acbfd568ae22a029e69d378f2d4a64bdb016c66bcc682eaaf517f\""
Sep 13 00:24:07.496691 containerd[1594]: time="2025-09-13T00:24:07.496668863Z" level=info msg="Container 4cf01c27133495d7e926bb989f895dc4751308ef6ad825ca24ee2300a6b6bd18: CDI devices from CRI Config.CDIDevices: []"
Sep 13 00:24:07.497477 containerd[1594]: time="2025-09-13T00:24:07.497454868Z" level=info msg="connecting to shim ef0919dbc70acbfd568ae22a029e69d378f2d4a64bdb016c66bcc682eaaf517f" address="unix:///run/containerd/s/7c6f2b58ee3db8efc37a4359d506a14bc1ace3de72a06cb3ba40168156764219" protocol=ttrpc version=3
Sep 13 00:24:07.504774 containerd[1594]: time="2025-09-13T00:24:07.504742474Z" level=info msg="CreateContainer within sandbox \"b517ce2ae07cc1baf3c98c41c981421372c6aae85e9a5b9c4698c685c9b31ae8\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"4cf01c27133495d7e926bb989f895dc4751308ef6ad825ca24ee2300a6b6bd18\""
Sep 13 00:24:07.505477 containerd[1594]: time="2025-09-13T00:24:07.505456103Z" level=info msg="StartContainer for \"4cf01c27133495d7e926bb989f895dc4751308ef6ad825ca24ee2300a6b6bd18\""
Sep 13 00:24:07.506687 containerd[1594]: time="2025-09-13T00:24:07.506665902Z" level=info msg="connecting to shim 4cf01c27133495d7e926bb989f895dc4751308ef6ad825ca24ee2300a6b6bd18" address="unix:///run/containerd/s/9430c9eb7f5300a0f68e2e596ef19bffdc006595efacd2470302c0db87b626ee" protocol=ttrpc version=3
Sep 13 00:24:07.513972 systemd[1]: Started cri-containerd-c4d2d92daec92f224c9a1fe562dc119f212db941ed5e2a2e8dfe621c3a356fc1.scope - libcontainer container c4d2d92daec92f224c9a1fe562dc119f212db941ed5e2a2e8dfe621c3a356fc1.
Sep 13 00:24:07.517364 systemd[1]: Started cri-containerd-ef0919dbc70acbfd568ae22a029e69d378f2d4a64bdb016c66bcc682eaaf517f.scope - libcontainer container ef0919dbc70acbfd568ae22a029e69d378f2d4a64bdb016c66bcc682eaaf517f.
Sep 13 00:24:07.536927 systemd[1]: Started cri-containerd-4cf01c27133495d7e926bb989f895dc4751308ef6ad825ca24ee2300a6b6bd18.scope - libcontainer container 4cf01c27133495d7e926bb989f895dc4751308ef6ad825ca24ee2300a6b6bd18.
Sep 13 00:24:07.580889 containerd[1594]: time="2025-09-13T00:24:07.577772462Z" level=info msg="StartContainer for \"c4d2d92daec92f224c9a1fe562dc119f212db941ed5e2a2e8dfe621c3a356fc1\" returns successfully"
Sep 13 00:24:07.587406 kubelet[2371]: I0913 00:24:07.587266 2371 kubelet_node_status.go:75] "Attempting to register node" node="localhost"
Sep 13 00:24:07.588227 kubelet[2371]: E0913 00:24:07.588186 2371 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.78:6443/api/v1/nodes\": dial tcp 10.0.0.78:6443: connect: connection refused" node="localhost"
Sep 13 00:24:07.589293 containerd[1594]: time="2025-09-13T00:24:07.589245177Z" level=info msg="StartContainer for \"ef0919dbc70acbfd568ae22a029e69d378f2d4a64bdb016c66bcc682eaaf517f\" returns successfully"
Sep 13 00:24:07.599312 containerd[1594]: time="2025-09-13T00:24:07.599255099Z" level=info msg="StartContainer for \"4cf01c27133495d7e926bb989f895dc4751308ef6ad825ca24ee2300a6b6bd18\" returns successfully"
Sep 13 00:24:08.308616 kubelet[2371]: E0913 00:24:08.308577 2371 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Sep 13 00:24:08.309040 kubelet[2371]: E0913 00:24:08.308748 2371 dns.go:153] "Nameserver limits
exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 13 00:24:08.312061 kubelet[2371]: E0913 00:24:08.312022 2371 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Sep 13 00:24:08.312225 kubelet[2371]: E0913 00:24:08.312151 2371 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 13 00:24:08.312797 kubelet[2371]: E0913 00:24:08.312775 2371 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Sep 13 00:24:08.312914 kubelet[2371]: E0913 00:24:08.312890 2371 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 13 00:24:08.391817 kubelet[2371]: I0913 00:24:08.391306 2371 kubelet_node_status.go:75] "Attempting to register node" node="localhost"
Sep 13 00:24:08.657596 kubelet[2371]: E0913 00:24:08.657445 2371 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost"
Sep 13 00:24:08.740636 kubelet[2371]: I0913 00:24:08.740586 2371 kubelet_node_status.go:78] "Successfully registered node" node="localhost"
Sep 13 00:24:08.740636 kubelet[2371]: E0913 00:24:08.740631 2371 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not found"
Sep 13 00:24:08.747809 kubelet[2371]: E0913 00:24:08.747757 2371 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Sep 13 00:24:08.848824 kubelet[2371]: E0913 00:24:08.848754 2371 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Sep 13 00:24:08.949858 kubelet[2371]: E0913 00:24:08.949603 2371 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Sep 13 00:24:09.050415 kubelet[2371]: E0913 00:24:09.050357 2371 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Sep 13 00:24:09.150475 kubelet[2371]: E0913 00:24:09.150422 2371 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Sep 13 00:24:09.251406 kubelet[2371]: E0913 00:24:09.251270 2371 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Sep 13 00:24:09.273822 kubelet[2371]: I0913 00:24:09.273775 2371 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost"
Sep 13 00:24:09.277904 kubelet[2371]: E0913 00:24:09.277865 2371 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost"
Sep 13 00:24:09.277904 kubelet[2371]: I0913 00:24:09.277888 2371 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost"
Sep 13 00:24:09.279111 kubelet[2371]: E0913 00:24:09.279080 2371 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-localhost"
Sep 13 00:24:09.279111 kubelet[2371]: I0913 00:24:09.279100 2371 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost"
Sep 13 00:24:09.280199 kubelet[2371]: E0913 00:24:09.280176 2371 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" is forbidden: no
PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost"
Sep 13 00:24:09.313248 kubelet[2371]: I0913 00:24:09.313222 2371 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost"
Sep 13 00:24:09.313635 kubelet[2371]: I0913 00:24:09.313291 2371 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost"
Sep 13 00:24:09.314904 kubelet[2371]: E0913 00:24:09.314873 2371 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost"
Sep 13 00:24:09.315049 kubelet[2371]: E0913 00:24:09.315029 2371 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 13 00:24:09.315402 kubelet[2371]: E0913 00:24:09.315382 2371 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost"
Sep 13 00:24:09.315492 kubelet[2371]: E0913 00:24:09.315478 2371 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 13 00:24:10.268540 kubelet[2371]: I0913 00:24:10.268491 2371 apiserver.go:52] "Watching apiserver"
Sep 13 00:24:10.273637 kubelet[2371]: I0913 00:24:10.273607 2371 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world"
Sep 13 00:24:10.314351 kubelet[2371]: I0913 00:24:10.314322 2371 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost"
Sep 13 00:24:10.320353 kubelet[2371]: E0913 00:24:10.320323 2371 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 13 00:24:10.907041 systemd[1]: Reload requested from client PID 2645 ('systemctl') (unit session-7.scope)...
Sep 13 00:24:10.907058 systemd[1]: Reloading...
Sep 13 00:24:10.989823 zram_generator::config[2689]: No configuration found.
Sep 13 00:24:11.086513 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Sep 13 00:24:11.238557 systemd[1]: Reloading finished in 331 ms.
Sep 13 00:24:11.268621 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 13 00:24:11.290381 systemd[1]: kubelet.service: Deactivated successfully.
Sep 13 00:24:11.290819 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 13 00:24:11.290887 systemd[1]: kubelet.service: Consumed 768ms CPU time, 130.8M memory peak.
Sep 13 00:24:11.293448 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 13 00:24:11.518897 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 13 00:24:11.523408 (kubelet)[2733]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Sep 13 00:24:11.561850 kubelet[2733]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Sep 13 00:24:11.561850 kubelet[2733]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
Sep 13 00:24:11.561850 kubelet[2733]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag.
See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Sep 13 00:24:11.562277 kubelet[2733]: I0913 00:24:11.561905 2733 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Sep 13 00:24:11.567868 kubelet[2733]: I0913 00:24:11.567838 2733 server.go:520] "Kubelet version" kubeletVersion="v1.32.4"
Sep 13 00:24:11.567868 kubelet[2733]: I0913 00:24:11.567860 2733 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Sep 13 00:24:11.568068 kubelet[2733]: I0913 00:24:11.568055 2733 server.go:954] "Client rotation is on, will bootstrap in background"
Sep 13 00:24:11.569090 kubelet[2733]: I0913 00:24:11.569069 2733 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
Sep 13 00:24:11.571455 kubelet[2733]: I0913 00:24:11.571422 2733 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Sep 13 00:24:11.576228 kubelet[2733]: I0913 00:24:11.576196 2733 server.go:1444] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd"
Sep 13 00:24:11.581097 kubelet[2733]: I0913 00:24:11.581058 2733 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Sep 13 00:24:11.581345 kubelet[2733]: I0913 00:24:11.581310 2733 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Sep 13 00:24:11.581487 kubelet[2733]: I0913 00:24:11.581335 2733 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Sep 13 00:24:11.581566 kubelet[2733]: I0913 00:24:11.581493 2733 topology_manager.go:138] "Creating topology manager with none policy"
Sep 13 00:24:11.581566 kubelet[2733]: I0913 00:24:11.581501 2733 container_manager_linux.go:304] "Creating device plugin manager"
Sep 13 00:24:11.581566 kubelet[2733]: I0913 00:24:11.581547 2733 state_mem.go:36] "Initialized new in-memory state store"
Sep 13 00:24:11.581707 kubelet[2733]: I0913 00:24:11.581684 2733 kubelet.go:446] "Attempting to sync node with API server"
Sep 13 00:24:11.581753 kubelet[2733]: I0913 00:24:11.581716 2733 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests"
Sep 13 00:24:11.581753 kubelet[2733]: I0913 00:24:11.581736 2733 kubelet.go:352] "Adding apiserver pod source"
Sep 13 00:24:11.581753 kubelet[2733]: I0913 00:24:11.581745 2733 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Sep 13 00:24:11.585301 kubelet[2733]: I0913 00:24:11.584914 2733 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v2.0.4" apiVersion="v1"
Sep 13 00:24:11.585301 kubelet[2733]: I0913 00:24:11.585299 2733 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Sep 13 00:24:11.585833 kubelet[2733]: I0913 00:24:11.585783 2733 watchdog_linux.go:99] "Systemd watchdog is not enabled"
Sep 13 00:24:11.585893 kubelet[2733]: I0913 00:24:11.585849 2733 server.go:1287] "Started kubelet"
Sep 13 00:24:11.586822 kubelet[2733]: I0913 00:24:11.586761 2733 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Sep 13 00:24:11.587272 kubelet[2733]: I0913 00:24:11.587254 2733 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Sep 13 00:24:11.587392 kubelet[2733]: I0913 00:24:11.587371 2733 server.go:169] "Starting to listen" address="0.0.0.0" port=10250
Sep 13 00:24:11.588479 kubelet[2733]: I0913 00:24:11.588448 2733 server.go:479] "Adding debug handlers to kubelet server"
Sep 13 00:24:11.589960 kubelet[2733]: I0913 00:24:11.589935 2733 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Sep 13 00:24:11.590621 kubelet[2733]: I0913 00:24:11.590601 2733 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Sep 13 00:24:11.592998 kubelet[2733]: I0913 00:24:11.592969 2733 volume_manager.go:297] "Starting Kubelet Volume Manager"
Sep 13 00:24:11.593124 kubelet[2733]: E0913 00:24:11.593100 2733 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Sep 13 00:24:11.593334 kubelet[2733]: I0913 00:24:11.593312 2733 desired_state_of_world_populator.go:150] "Desired state populator starts to run"
Sep 13 00:24:11.593557 kubelet[2733]: I0913 00:24:11.593537 2733 reconciler.go:26] "Reconciler: start to sync state"
Sep 13 00:24:11.599706 kubelet[2733]: I0913 00:24:11.599670 2733 factory.go:221] Registration of the systemd container factory successfully
Sep 13 00:24:11.600416 kubelet[2733]: I0913 00:24:11.600364 2733 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Sep 13 00:24:11.602098 kubelet[2733]: E0913 00:24:11.601469 2733 kubelet.go:1555] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Sep 13 00:24:11.603380 kubelet[2733]: I0913 00:24:11.603359 2733 factory.go:221] Registration of the containerd container factory successfully
Sep 13 00:24:11.606973 kubelet[2733]: I0913 00:24:11.606940 2733 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Sep 13 00:24:11.608308 kubelet[2733]: I0913 00:24:11.608287 2733 kubelet_network_linux.go:50] "Initialized iptables rules."
protocol="IPv6"
Sep 13 00:24:11.608361 kubelet[2733]: I0913 00:24:11.608315 2733 status_manager.go:227] "Starting to sync pod status with apiserver"
Sep 13 00:24:11.608361 kubelet[2733]: I0913 00:24:11.608336 2733 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
Sep 13 00:24:11.608361 kubelet[2733]: I0913 00:24:11.608344 2733 kubelet.go:2382] "Starting kubelet main sync loop"
Sep 13 00:24:11.608432 kubelet[2733]: E0913 00:24:11.608397 2733 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Sep 13 00:24:11.636011 kubelet[2733]: I0913 00:24:11.635982 2733 cpu_manager.go:221] "Starting CPU manager" policy="none"
Sep 13 00:24:11.636011 kubelet[2733]: I0913 00:24:11.636002 2733 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s"
Sep 13 00:24:11.636011 kubelet[2733]: I0913 00:24:11.636024 2733 state_mem.go:36] "Initialized new in-memory state store"
Sep 13 00:24:11.636245 kubelet[2733]: I0913 00:24:11.636228 2733 state_mem.go:88] "Updated default CPUSet" cpuSet=""
Sep 13 00:24:11.636273 kubelet[2733]: I0913 00:24:11.636245 2733 state_mem.go:96] "Updated CPUSet assignments" assignments={}
Sep 13 00:24:11.636273 kubelet[2733]: I0913 00:24:11.636265 2733 policy_none.go:49] "None policy: Start"
Sep 13 00:24:11.636316 kubelet[2733]: I0913 00:24:11.636275 2733 memory_manager.go:186] "Starting memorymanager" policy="None"
Sep 13 00:24:11.636316 kubelet[2733]: I0913 00:24:11.636288 2733 state_mem.go:35] "Initializing new in-memory state store"
Sep 13 00:24:11.636418 kubelet[2733]: I0913 00:24:11.636403 2733 state_mem.go:75] "Updated machine memory state"
Sep 13 00:24:11.643888 kubelet[2733]: I0913 00:24:11.643856 2733 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Sep 13 00:24:11.644203 kubelet[2733]: I0913 00:24:11.644082 2733 eviction_manager.go:189] "Eviction manager: starting control loop"
Sep 13 00:24:11.644203 kubelet[2733]: I0913 00:24:11.644102 2733 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Sep 13 00:24:11.644323 kubelet[2733]: I0913 00:24:11.644301 2733 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Sep 13 00:24:11.645281 kubelet[2733]: E0913 00:24:11.645167 2733 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime"
Sep 13 00:24:11.709564 kubelet[2733]: I0913 00:24:11.709498 2733 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost"
Sep 13 00:24:11.709564 kubelet[2733]: I0913 00:24:11.709556 2733 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost"
Sep 13 00:24:11.709765 kubelet[2733]: I0913 00:24:11.709724 2733 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost"
Sep 13 00:24:11.750043 kubelet[2733]: I0913 00:24:11.749990 2733 kubelet_node_status.go:75] "Attempting to register node" node="localhost"
Sep 13 00:24:11.794622 kubelet[2733]: I0913 00:24:11.794480 2733 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/1403266a9792debaa127cd8df7a81c3c-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"1403266a9792debaa127cd8df7a81c3c\") " pod="kube-system/kube-controller-manager-localhost"
Sep 13 00:24:11.794622 kubelet[2733]: I0913 00:24:11.794516 2733 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/1403266a9792debaa127cd8df7a81c3c-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"1403266a9792debaa127cd8df7a81c3c\") "
pod="kube-system/kube-controller-manager-localhost"
Sep 13 00:24:11.794622 kubelet[2733]: I0913 00:24:11.794541 2733 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/1403266a9792debaa127cd8df7a81c3c-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"1403266a9792debaa127cd8df7a81c3c\") " pod="kube-system/kube-controller-manager-localhost"
Sep 13 00:24:11.794622 kubelet[2733]: I0913 00:24:11.794563 2733 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/1403266a9792debaa127cd8df7a81c3c-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"1403266a9792debaa127cd8df7a81c3c\") " pod="kube-system/kube-controller-manager-localhost"
Sep 13 00:24:11.794622 kubelet[2733]: I0913 00:24:11.794583 2733 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/1403266a9792debaa127cd8df7a81c3c-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"1403266a9792debaa127cd8df7a81c3c\") " pod="kube-system/kube-controller-manager-localhost"
Sep 13 00:24:11.794934 kubelet[2733]: I0913 00:24:11.794604 2733 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/72a30db4fc25e4da65a3b99eba43be94-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"72a30db4fc25e4da65a3b99eba43be94\") " pod="kube-system/kube-scheduler-localhost"
Sep 13 00:24:11.794934 kubelet[2733]: I0913 00:24:11.794649 2733 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/44743d6c951adf6a061fa87711344e87-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"44743d6c951adf6a061fa87711344e87\") " pod="kube-system/kube-apiserver-localhost"
Sep 13 00:24:11.794934 kubelet[2733]: I0913 00:24:11.794717 2733 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/44743d6c951adf6a061fa87711344e87-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"44743d6c951adf6a061fa87711344e87\") " pod="kube-system/kube-apiserver-localhost"
Sep 13 00:24:11.794934 kubelet[2733]: I0913 00:24:11.794765 2733 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/44743d6c951adf6a061fa87711344e87-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"44743d6c951adf6a061fa87711344e87\") " pod="kube-system/kube-apiserver-localhost"
Sep 13 00:24:12.404233 kubelet[2733]: E0913 00:24:12.402181 2733 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost"
Sep 13 00:24:12.404233 kubelet[2733]: E0913 00:24:12.402430 2733 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 13 00:24:12.404233 kubelet[2733]: E0913 00:24:12.402440 2733 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 13 00:24:12.404233 kubelet[2733]: E0913 00:24:12.402617 2733 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 13 00:24:12.404233 kubelet[2733]: I0913 00:24:12.403201 2733 kubelet_node_status.go:124] "Node was previously registered" node="localhost"
Sep 13 00:24:12.404233 kubelet[2733]: I0913 00:24:12.403267 2733 kubelet_node_status.go:78]
"Successfully registered node" node="localhost" Sep 13 00:24:12.582680 kubelet[2733]: I0913 00:24:12.582621 2733 apiserver.go:52] "Watching apiserver" Sep 13 00:24:12.594286 kubelet[2733]: I0913 00:24:12.594242 2733 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Sep 13 00:24:12.622267 kubelet[2733]: I0913 00:24:12.622066 2733 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Sep 13 00:24:12.622267 kubelet[2733]: E0913 00:24:12.622209 2733 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:24:12.622844 kubelet[2733]: E0913 00:24:12.622823 2733 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:24:13.174908 kubelet[2733]: E0913 00:24:13.174830 2733 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost" Sep 13 00:24:13.175347 kubelet[2733]: E0913 00:24:13.175101 2733 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:24:13.193818 kubelet[2733]: I0913 00:24:13.193708 2733 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=2.193661691 podStartE2EDuration="2.193661691s" podCreationTimestamp="2025-09-13 00:24:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-13 00:24:13.193321896 +0000 UTC m=+1.665804811" watchObservedRunningTime="2025-09-13 00:24:13.193661691 +0000 UTC m=+1.666144576" Sep 13 
00:24:13.212623 kubelet[2733]: I0913 00:24:13.212379 2733 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=2.212355531 podStartE2EDuration="2.212355531s" podCreationTimestamp="2025-09-13 00:24:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-13 00:24:13.202839825 +0000 UTC m=+1.675322750" watchObservedRunningTime="2025-09-13 00:24:13.212355531 +0000 UTC m=+1.684838416" Sep 13 00:24:13.224020 kubelet[2733]: I0913 00:24:13.223955 2733 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=3.223937148 podStartE2EDuration="3.223937148s" podCreationTimestamp="2025-09-13 00:24:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-13 00:24:13.213194909 +0000 UTC m=+1.685677784" watchObservedRunningTime="2025-09-13 00:24:13.223937148 +0000 UTC m=+1.696420033" Sep 13 00:24:13.623878 kubelet[2733]: E0913 00:24:13.623705 2733 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:24:13.623878 kubelet[2733]: E0913 00:24:13.623718 2733 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:24:13.624304 kubelet[2733]: E0913 00:24:13.624130 2733 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:24:15.598629 kubelet[2733]: E0913 00:24:15.598576 2733 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have 
been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:24:15.601871 kubelet[2733]: E0913 00:24:15.601842 2733 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:24:15.686946 kubelet[2733]: I0913 00:24:15.686915 2733 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Sep 13 00:24:15.687224 containerd[1594]: time="2025-09-13T00:24:15.687189436Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Sep 13 00:24:15.687582 kubelet[2733]: I0913 00:24:15.687324 2733 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Sep 13 00:24:17.778970 kubelet[2733]: E0913 00:24:17.778933 2733 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:24:18.292537 systemd[1]: Created slice kubepods-besteffort-podf96b409b_66fa_4fba_802c_586b06b90018.slice - libcontainer container kubepods-besteffort-podf96b409b_66fa_4fba_802c_586b06b90018.slice. 
Sep 13 00:24:18.334026 kubelet[2733]: I0913 00:24:18.333983 2733 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f96b409b-66fa-4fba-802c-586b06b90018-lib-modules\") pod \"kube-proxy-m4r5b\" (UID: \"f96b409b-66fa-4fba-802c-586b06b90018\") " pod="kube-system/kube-proxy-m4r5b"
Sep 13 00:24:18.334026 kubelet[2733]: I0913 00:24:18.334017 2733 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/f96b409b-66fa-4fba-802c-586b06b90018-xtables-lock\") pod \"kube-proxy-m4r5b\" (UID: \"f96b409b-66fa-4fba-802c-586b06b90018\") " pod="kube-system/kube-proxy-m4r5b"
Sep 13 00:24:18.334026 kubelet[2733]: I0913 00:24:18.334034 2733 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/f96b409b-66fa-4fba-802c-586b06b90018-kube-proxy\") pod \"kube-proxy-m4r5b\" (UID: \"f96b409b-66fa-4fba-802c-586b06b90018\") " pod="kube-system/kube-proxy-m4r5b"
Sep 13 00:24:18.334213 kubelet[2733]: I0913 00:24:18.334171 2733 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tdd8s\" (UniqueName: \"kubernetes.io/projected/f96b409b-66fa-4fba-802c-586b06b90018-kube-api-access-tdd8s\") pod \"kube-proxy-m4r5b\" (UID: \"f96b409b-66fa-4fba-802c-586b06b90018\") " pod="kube-system/kube-proxy-m4r5b"
Sep 13 00:24:18.631135 kubelet[2733]: E0913 00:24:18.631101 2733 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 13 00:24:18.905874 kubelet[2733]: E0913 00:24:18.905720 2733 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 13 00:24:18.906463 containerd[1594]: time="2025-09-13T00:24:18.906386494Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-m4r5b,Uid:f96b409b-66fa-4fba-802c-586b06b90018,Namespace:kube-system,Attempt:0,}"
Sep 13 00:24:19.599828 containerd[1594]: time="2025-09-13T00:24:19.599758143Z" level=info msg="connecting to shim f71ad60de792b1d2120d4bce503393e88bc1c9712f0f64480c9606b97aafa545" address="unix:///run/containerd/s/b6fcf73c624373fd7df7fb0bb480a11dc896a5785b5705e2ef058ed602906460" namespace=k8s.io protocol=ttrpc version=3
Sep 13 00:24:19.603606 systemd[1]: Created slice kubepods-besteffort-podf4cb6908_a74b_43e0_bb75_78677879137a.slice - libcontainer container kubepods-besteffort-podf4cb6908_a74b_43e0_bb75_78677879137a.slice.
Sep 13 00:24:19.627034 systemd[1]: Started cri-containerd-f71ad60de792b1d2120d4bce503393e88bc1c9712f0f64480c9606b97aafa545.scope - libcontainer container f71ad60de792b1d2120d4bce503393e88bc1c9712f0f64480c9606b97aafa545.
Sep 13 00:24:19.644423 kubelet[2733]: I0913 00:24:19.644373 2733 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/f4cb6908-a74b-43e0-bb75-78677879137a-var-lib-calico\") pod \"tigera-operator-755d956888-nprgq\" (UID: \"f4cb6908-a74b-43e0-bb75-78677879137a\") " pod="tigera-operator/tigera-operator-755d956888-nprgq"
Sep 13 00:24:19.644423 kubelet[2733]: I0913 00:24:19.644408 2733 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6vbms\" (UniqueName: \"kubernetes.io/projected/f4cb6908-a74b-43e0-bb75-78677879137a-kube-api-access-6vbms\") pod \"tigera-operator-755d956888-nprgq\" (UID: \"f4cb6908-a74b-43e0-bb75-78677879137a\") " pod="tigera-operator/tigera-operator-755d956888-nprgq"
Sep 13 00:24:19.679690 containerd[1594]: time="2025-09-13T00:24:19.679324937Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-m4r5b,Uid:f96b409b-66fa-4fba-802c-586b06b90018,Namespace:kube-system,Attempt:0,} returns sandbox id \"f71ad60de792b1d2120d4bce503393e88bc1c9712f0f64480c9606b97aafa545\""
Sep 13 00:24:19.680778 kubelet[2733]: E0913 00:24:19.680740 2733 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 13 00:24:19.682571 containerd[1594]: time="2025-09-13T00:24:19.682518130Z" level=info msg="CreateContainer within sandbox \"f71ad60de792b1d2120d4bce503393e88bc1c9712f0f64480c9606b97aafa545\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
Sep 13 00:24:19.696710 containerd[1594]: time="2025-09-13T00:24:19.696651620Z" level=info msg="Container e37e2272dbee2e8b56e4a731eacd3e906044e2e4197f451543bb13e38a1ad75b: CDI devices from CRI Config.CDIDevices: []"
Sep 13 00:24:19.706707 containerd[1594]: time="2025-09-13T00:24:19.706657290Z" level=info msg="CreateContainer within sandbox \"f71ad60de792b1d2120d4bce503393e88bc1c9712f0f64480c9606b97aafa545\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"e37e2272dbee2e8b56e4a731eacd3e906044e2e4197f451543bb13e38a1ad75b\""
Sep 13 00:24:19.707286 containerd[1594]: time="2025-09-13T00:24:19.707254201Z" level=info msg="StartContainer for \"e37e2272dbee2e8b56e4a731eacd3e906044e2e4197f451543bb13e38a1ad75b\""
Sep 13 00:24:19.708697 containerd[1594]: time="2025-09-13T00:24:19.708655800Z" level=info msg="connecting to shim e37e2272dbee2e8b56e4a731eacd3e906044e2e4197f451543bb13e38a1ad75b" address="unix:///run/containerd/s/b6fcf73c624373fd7df7fb0bb480a11dc896a5785b5705e2ef058ed602906460" protocol=ttrpc version=3
Sep 13 00:24:19.730967 systemd[1]: Started cri-containerd-e37e2272dbee2e8b56e4a731eacd3e906044e2e4197f451543bb13e38a1ad75b.scope - libcontainer container e37e2272dbee2e8b56e4a731eacd3e906044e2e4197f451543bb13e38a1ad75b.
Sep 13 00:24:19.780555 containerd[1594]: time="2025-09-13T00:24:19.780508181Z" level=info msg="StartContainer for \"e37e2272dbee2e8b56e4a731eacd3e906044e2e4197f451543bb13e38a1ad75b\" returns successfully"
Sep 13 00:24:19.909362 containerd[1594]: time="2025-09-13T00:24:19.909238377Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-755d956888-nprgq,Uid:f4cb6908-a74b-43e0-bb75-78677879137a,Namespace:tigera-operator,Attempt:0,}"
Sep 13 00:24:19.932153 containerd[1594]: time="2025-09-13T00:24:19.932109125Z" level=info msg="connecting to shim 5863bfd6b761b4dde6c669af7e6c487600c2862505b55f1c8c5e4472c3b2d3cb" address="unix:///run/containerd/s/36ed62cc432d979b1d63f8a864eee14836d54e5ba087ff233704e62764970fbc" namespace=k8s.io protocol=ttrpc version=3
Sep 13 00:24:19.965982 systemd[1]: Started cri-containerd-5863bfd6b761b4dde6c669af7e6c487600c2862505b55f1c8c5e4472c3b2d3cb.scope - libcontainer container 5863bfd6b761b4dde6c669af7e6c487600c2862505b55f1c8c5e4472c3b2d3cb.
Sep 13 00:24:20.009473 containerd[1594]: time="2025-09-13T00:24:20.009405872Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-755d956888-nprgq,Uid:f4cb6908-a74b-43e0-bb75-78677879137a,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"5863bfd6b761b4dde6c669af7e6c487600c2862505b55f1c8c5e4472c3b2d3cb\""
Sep 13 00:24:20.011516 containerd[1594]: time="2025-09-13T00:24:20.011483577Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.6\""
Sep 13 00:24:20.636134 kubelet[2733]: E0913 00:24:20.636094 2733 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 13 00:24:22.306452 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1925764511.mount: Deactivated successfully.
Sep 13 00:24:23.097049 containerd[1594]: time="2025-09-13T00:24:23.096966967Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 13 00:24:23.097911 containerd[1594]: time="2025-09-13T00:24:23.097865848Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.6: active requests=0, bytes read=25062609"
Sep 13 00:24:23.099218 containerd[1594]: time="2025-09-13T00:24:23.099178696Z" level=info msg="ImageCreate event name:\"sha256:1911afdd8478c6ca3036ff85614050d5d19acc0f0c3f6a5a7b3e34b38dd309c9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 13 00:24:23.101454 containerd[1594]: time="2025-09-13T00:24:23.101419019Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:00a7a9b62f9b9a4e0856128b078539783b8352b07f707bff595cb604cc580f6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 13 00:24:23.102168 containerd[1594]: time="2025-09-13T00:24:23.102123781Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.6\" with image id \"sha256:1911afdd8478c6ca3036ff85614050d5d19acc0f0c3f6a5a7b3e34b38dd309c9\", repo tag \"quay.io/tigera/operator:v1.38.6\", repo digest \"quay.io/tigera/operator@sha256:00a7a9b62f9b9a4e0856128b078539783b8352b07f707bff595cb604cc580f6e\", size \"25058604\" in 3.090610608s"
Sep 13 00:24:23.102232 containerd[1594]: time="2025-09-13T00:24:23.102168806Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.6\" returns image reference \"sha256:1911afdd8478c6ca3036ff85614050d5d19acc0f0c3f6a5a7b3e34b38dd309c9\""
Sep 13 00:24:23.104305 containerd[1594]: time="2025-09-13T00:24:23.104257702Z" level=info msg="CreateContainer within sandbox \"5863bfd6b761b4dde6c669af7e6c487600c2862505b55f1c8c5e4472c3b2d3cb\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}"
Sep 13 00:24:23.113957 containerd[1594]: time="2025-09-13T00:24:23.113905330Z" level=info msg="Container 5828a0c80d11b57c063c8b8aa4c547bd26c2d3bf77a5e923c3f6f26578a1ef49: CDI devices from CRI Config.CDIDevices: []"
Sep 13 00:24:23.121006 containerd[1594]: time="2025-09-13T00:24:23.120963371Z" level=info msg="CreateContainer within sandbox \"5863bfd6b761b4dde6c669af7e6c487600c2862505b55f1c8c5e4472c3b2d3cb\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"5828a0c80d11b57c063c8b8aa4c547bd26c2d3bf77a5e923c3f6f26578a1ef49\""
Sep 13 00:24:23.121721 containerd[1594]: time="2025-09-13T00:24:23.121679444Z" level=info msg="StartContainer for \"5828a0c80d11b57c063c8b8aa4c547bd26c2d3bf77a5e923c3f6f26578a1ef49\""
Sep 13 00:24:23.122947 containerd[1594]: time="2025-09-13T00:24:23.122910937Z" level=info msg="connecting to shim 5828a0c80d11b57c063c8b8aa4c547bd26c2d3bf77a5e923c3f6f26578a1ef49" address="unix:///run/containerd/s/36ed62cc432d979b1d63f8a864eee14836d54e5ba087ff233704e62764970fbc" protocol=ttrpc version=3
Sep 13 00:24:23.177004 systemd[1]: Started cri-containerd-5828a0c80d11b57c063c8b8aa4c547bd26c2d3bf77a5e923c3f6f26578a1ef49.scope - libcontainer container 5828a0c80d11b57c063c8b8aa4c547bd26c2d3bf77a5e923c3f6f26578a1ef49.
Sep 13 00:24:23.702395 containerd[1594]: time="2025-09-13T00:24:23.702332764Z" level=info msg="StartContainer for \"5828a0c80d11b57c063c8b8aa4c547bd26c2d3bf77a5e923c3f6f26578a1ef49\" returns successfully"
Sep 13 00:24:23.944726 update_engine[1585]: I20250913 00:24:23.944602 1585 update_attempter.cc:509] Updating boot flags...
Sep 13 00:24:24.724572 kubelet[2733]: I0913 00:24:24.724495 2733 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-m4r5b" podStartSLOduration=7.724477708 podStartE2EDuration="7.724477708s" podCreationTimestamp="2025-09-13 00:24:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-13 00:24:20.921168631 +0000 UTC m=+9.393651506" watchObservedRunningTime="2025-09-13 00:24:24.724477708 +0000 UTC m=+13.196960593"
Sep 13 00:24:25.602979 kubelet[2733]: E0913 00:24:25.602939 2733 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 13 00:24:25.614406 kubelet[2733]: E0913 00:24:25.613598 2733 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 13 00:24:25.620098 kubelet[2733]: I0913 00:24:25.620047 2733 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-755d956888-nprgq" podStartSLOduration=4.527766815 podStartE2EDuration="7.620029701s" podCreationTimestamp="2025-09-13 00:24:18 +0000 UTC" firstStartedPulling="2025-09-13 00:24:20.010693299 +0000 UTC m=+8.483176174" lastFinishedPulling="2025-09-13 00:24:23.102956175 +0000 UTC m=+11.575439060" observedRunningTime="2025-09-13 00:24:24.726365006 +0000 UTC m=+13.198847891" watchObservedRunningTime="2025-09-13 00:24:25.620029701 +0000 UTC m=+14.092512586"
Sep 13 00:24:25.707131 kubelet[2733]: E0913 00:24:25.706733 2733 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 13 00:24:25.707131 kubelet[2733]: E0913 00:24:25.706902 2733 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 13 00:24:28.785607 sudo[1802]: pam_unix(sudo:session): session closed for user root
Sep 13 00:24:28.789280 sshd[1801]: Connection closed by 10.0.0.1 port 39624
Sep 13 00:24:28.791513 sshd-session[1797]: pam_unix(sshd:session): session closed for user core
Sep 13 00:24:28.797053 systemd[1]: sshd@6-10.0.0.78:22-10.0.0.1:39624.service: Deactivated successfully.
Sep 13 00:24:28.799466 systemd[1]: session-7.scope: Deactivated successfully.
Sep 13 00:24:28.799973 systemd[1]: session-7.scope: Consumed 5.661s CPU time, 226.6M memory peak.
Sep 13 00:24:28.801894 systemd-logind[1582]: Session 7 logged out. Waiting for processes to exit.
Sep 13 00:24:28.806163 systemd-logind[1582]: Removed session 7.
Sep 13 00:24:31.321740 systemd[1]: Created slice kubepods-besteffort-pod7998d59a_2e3e_4b83_b025_12f88f57f773.slice - libcontainer container kubepods-besteffort-pod7998d59a_2e3e_4b83_b025_12f88f57f773.slice.
Sep 13 00:24:31.419150 kubelet[2733]: I0913 00:24:31.419091 2733 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qb6d4\" (UniqueName: \"kubernetes.io/projected/7998d59a-2e3e-4b83-b025-12f88f57f773-kube-api-access-qb6d4\") pod \"calico-typha-597fbc4dfb-z4ddc\" (UID: \"7998d59a-2e3e-4b83-b025-12f88f57f773\") " pod="calico-system/calico-typha-597fbc4dfb-z4ddc"
Sep 13 00:24:31.419150 kubelet[2733]: I0913 00:24:31.419136 2733 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7998d59a-2e3e-4b83-b025-12f88f57f773-tigera-ca-bundle\") pod \"calico-typha-597fbc4dfb-z4ddc\" (UID: \"7998d59a-2e3e-4b83-b025-12f88f57f773\") " pod="calico-system/calico-typha-597fbc4dfb-z4ddc"
Sep 13 00:24:31.419150 kubelet[2733]: I0913 00:24:31.419154 2733 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/7998d59a-2e3e-4b83-b025-12f88f57f773-typha-certs\") pod \"calico-typha-597fbc4dfb-z4ddc\" (UID: \"7998d59a-2e3e-4b83-b025-12f88f57f773\") " pod="calico-system/calico-typha-597fbc4dfb-z4ddc"
Sep 13 00:24:31.506688 systemd[1]: Created slice kubepods-besteffort-podb5f9eaab_3c20_4e25_93ba_0a227f58d28c.slice - libcontainer container kubepods-besteffort-podb5f9eaab_3c20_4e25_93ba_0a227f58d28c.slice.
Sep 13 00:24:31.520372 kubelet[2733]: I0913 00:24:31.519580 2733 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/b5f9eaab-3c20-4e25-93ba-0a227f58d28c-var-run-calico\") pod \"calico-node-tlfp2\" (UID: \"b5f9eaab-3c20-4e25-93ba-0a227f58d28c\") " pod="calico-system/calico-node-tlfp2" Sep 13 00:24:31.520372 kubelet[2733]: I0913 00:24:31.519962 2733 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/b5f9eaab-3c20-4e25-93ba-0a227f58d28c-cni-bin-dir\") pod \"calico-node-tlfp2\" (UID: \"b5f9eaab-3c20-4e25-93ba-0a227f58d28c\") " pod="calico-system/calico-node-tlfp2" Sep 13 00:24:31.520372 kubelet[2733]: I0913 00:24:31.519983 2733 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/b5f9eaab-3c20-4e25-93ba-0a227f58d28c-node-certs\") pod \"calico-node-tlfp2\" (UID: \"b5f9eaab-3c20-4e25-93ba-0a227f58d28c\") " pod="calico-system/calico-node-tlfp2" Sep 13 00:24:31.520372 kubelet[2733]: I0913 00:24:31.520008 2733 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/b5f9eaab-3c20-4e25-93ba-0a227f58d28c-cni-log-dir\") pod \"calico-node-tlfp2\" (UID: \"b5f9eaab-3c20-4e25-93ba-0a227f58d28c\") " pod="calico-system/calico-node-tlfp2" Sep 13 00:24:31.520372 kubelet[2733]: I0913 00:24:31.520028 2733 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/b5f9eaab-3c20-4e25-93ba-0a227f58d28c-var-lib-calico\") pod \"calico-node-tlfp2\" (UID: \"b5f9eaab-3c20-4e25-93ba-0a227f58d28c\") " pod="calico-system/calico-node-tlfp2" Sep 13 00:24:31.520680 kubelet[2733]: I0913 00:24:31.520050 2733 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qxqrb\" (UniqueName: \"kubernetes.io/projected/b5f9eaab-3c20-4e25-93ba-0a227f58d28c-kube-api-access-qxqrb\") pod \"calico-node-tlfp2\" (UID: \"b5f9eaab-3c20-4e25-93ba-0a227f58d28c\") " pod="calico-system/calico-node-tlfp2" Sep 13 00:24:31.520680 kubelet[2733]: I0913 00:24:31.520096 2733 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/b5f9eaab-3c20-4e25-93ba-0a227f58d28c-policysync\") pod \"calico-node-tlfp2\" (UID: \"b5f9eaab-3c20-4e25-93ba-0a227f58d28c\") " pod="calico-system/calico-node-tlfp2" Sep 13 00:24:31.520680 kubelet[2733]: I0913 00:24:31.520111 2733 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/b5f9eaab-3c20-4e25-93ba-0a227f58d28c-xtables-lock\") pod \"calico-node-tlfp2\" (UID: \"b5f9eaab-3c20-4e25-93ba-0a227f58d28c\") " pod="calico-system/calico-node-tlfp2" Sep 13 00:24:31.520680 kubelet[2733]: I0913 00:24:31.520126 2733 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/b5f9eaab-3c20-4e25-93ba-0a227f58d28c-cni-net-dir\") pod \"calico-node-tlfp2\" (UID: \"b5f9eaab-3c20-4e25-93ba-0a227f58d28c\") " pod="calico-system/calico-node-tlfp2" Sep 13 00:24:31.520680 kubelet[2733]: I0913 00:24:31.520141 2733 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b5f9eaab-3c20-4e25-93ba-0a227f58d28c-lib-modules\") pod \"calico-node-tlfp2\" (UID: \"b5f9eaab-3c20-4e25-93ba-0a227f58d28c\") " pod="calico-system/calico-node-tlfp2" Sep 13 00:24:31.520972 kubelet[2733]: I0913 00:24:31.520156 2733 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b5f9eaab-3c20-4e25-93ba-0a227f58d28c-tigera-ca-bundle\") pod \"calico-node-tlfp2\" (UID: \"b5f9eaab-3c20-4e25-93ba-0a227f58d28c\") " pod="calico-system/calico-node-tlfp2" Sep 13 00:24:31.520972 kubelet[2733]: I0913 00:24:31.520225 2733 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/b5f9eaab-3c20-4e25-93ba-0a227f58d28c-flexvol-driver-host\") pod \"calico-node-tlfp2\" (UID: \"b5f9eaab-3c20-4e25-93ba-0a227f58d28c\") " pod="calico-system/calico-node-tlfp2" Sep 13 00:24:31.624309 kubelet[2733]: E0913 00:24:31.624261 2733 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:24:31.624309 kubelet[2733]: W0913 00:24:31.624286 2733 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:24:31.624309 kubelet[2733]: E0913 00:24:31.624324 2733 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 00:24:31.629037 kubelet[2733]: E0913 00:24:31.629002 2733 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:24:31.630876 containerd[1594]: time="2025-09-13T00:24:31.630834642Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-597fbc4dfb-z4ddc,Uid:7998d59a-2e3e-4b83-b025-12f88f57f773,Namespace:calico-system,Attempt:0,}" Sep 13 00:24:31.631354 kubelet[2733]: E0913 00:24:31.631314 2733 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:24:31.631354 kubelet[2733]: W0913 00:24:31.631330 2733 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:24:31.631458 kubelet[2733]: E0913 00:24:31.631347 2733 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:24:31.636441 kubelet[2733]: E0913 00:24:31.636407 2733 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:24:31.636441 kubelet[2733]: W0913 00:24:31.636428 2733 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:24:31.636519 kubelet[2733]: E0913 00:24:31.636444 2733 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 00:24:31.751736 kubelet[2733]: E0913 00:24:31.751682 2733 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-zh9k4" podUID="011904af-1a84-4b2a-90be-5056b5973c41" Sep 13 00:24:31.765234 containerd[1594]: time="2025-09-13T00:24:31.764980599Z" level=info msg="connecting to shim 0fc88dbaf5e702ac1671e24c81619392ad70326dea8d9189895afdb32b6bdd2c" address="unix:///run/containerd/s/285197681c70492442196d7b2e789e7fcb4283bd7929ac060464bb1e9f8ee743" namespace=k8s.io protocol=ttrpc version=3 Sep 13 00:24:31.800010 systemd[1]: Started cri-containerd-0fc88dbaf5e702ac1671e24c81619392ad70326dea8d9189895afdb32b6bdd2c.scope - libcontainer container 0fc88dbaf5e702ac1671e24c81619392ad70326dea8d9189895afdb32b6bdd2c. Sep 13 00:24:31.807476 kubelet[2733]: E0913 00:24:31.807421 2733 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:24:31.807748 kubelet[2733]: W0913 00:24:31.807563 2733 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:24:31.807748 kubelet[2733]: E0913 00:24:31.807595 2733 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 00:24:31.819270 containerd[1594]: time="2025-09-13T00:24:31.819139417Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-tlfp2,Uid:b5f9eaab-3c20-4e25-93ba-0a227f58d28c,Namespace:calico-system,Attempt:0,}" Sep 13 00:24:31.819765 kubelet[2733]: E0913 00:24:31.819747 2733 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:24:31.820038 kubelet[2733]: W0913 00:24:31.819829 2733 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:24:31.820038 kubelet[2733]: E0913 00:24:31.819846 2733 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 00:24:31.825583 kubelet[2733]: I0913 00:24:31.825071 2733 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/011904af-1a84-4b2a-90be-5056b5973c41-socket-dir\") pod \"csi-node-driver-zh9k4\" (UID: \"011904af-1a84-4b2a-90be-5056b5973c41\") " pod="calico-system/csi-node-driver-zh9k4" Sep 13 00:24:31.827209 kubelet[2733]: I0913 00:24:31.827117 2733 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/011904af-1a84-4b2a-90be-5056b5973c41-kubelet-dir\") pod \"csi-node-driver-zh9k4\" (UID: \"011904af-1a84-4b2a-90be-5056b5973c41\") " pod="calico-system/csi-node-driver-zh9k4" Sep 13 00:24:31.831743 kubelet[2733]: I0913 00:24:31.831699 2733 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/011904af-1a84-4b2a-90be-5056b5973c41-varrun\") pod \"csi-node-driver-zh9k4\" (UID: \"011904af-1a84-4b2a-90be-5056b5973c41\") " pod="calico-system/csi-node-driver-zh9k4" Sep 13 00:24:31.832985 kubelet[2733]: I0913 00:24:31.832903 2733 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/011904af-1a84-4b2a-90be-5056b5973c41-registration-dir\") pod \"csi-node-driver-zh9k4\" (UID: \"011904af-1a84-4b2a-90be-5056b5973c41\") " pod="calico-system/csi-node-driver-zh9k4" Sep 13 00:24:31.835092 kubelet[2733]: I0913 00:24:31.834003 2733 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xcgjl\" (UniqueName: \"kubernetes.io/projected/011904af-1a84-4b2a-90be-5056b5973c41-kube-api-access-xcgjl\") pod \"csi-node-driver-zh9k4\" (UID: \"011904af-1a84-4b2a-90be-5056b5973c41\") " pod="calico-system/csi-node-driver-zh9k4" Sep 13 00:24:31.861486 containerd[1594]: time="2025-09-13T00:24:31.861422228Z" level=info msg="connecting to shim 9fa63d3de2f32288e608f9778f8f59cb3c5902954d5d3c3bb79722ba616a1207" address="unix:///run/containerd/s/9bf44e24f86a6ddca644f9d67719111f872b07b6c44aec5609b6a8ed59552558" namespace=k8s.io protocol=ttrpc version=3 Sep 13 00:24:31.926963 systemd[1]: Started cri-containerd-9fa63d3de2f32288e608f9778f8f59cb3c5902954d5d3c3bb79722ba616a1207.scope - libcontainer container 9fa63d3de2f32288e608f9778f8f59cb3c5902954d5d3c3bb79722ba616a1207. Sep 13 00:24:31.938865 kubelet[2733]: E0913 00:24:31.938843 2733 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:24:31.938865 kubelet[2733]: W0913 00:24:31.938856 2733 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:24:31.938982 kubelet[2733]: E0913 00:24:31.938945 2733 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 00:24:31.939047 kubelet[2733]: E0913 00:24:31.939028 2733 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:24:31.939047 kubelet[2733]: W0913 00:24:31.939040 2733 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:24:31.939151 kubelet[2733]: E0913 00:24:31.939089 2733 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:24:31.939310 kubelet[2733]: E0913 00:24:31.939267 2733 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:24:31.939310 kubelet[2733]: W0913 00:24:31.939278 2733 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:24:31.939442 kubelet[2733]: E0913 00:24:31.939359 2733 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 00:24:31.939586 kubelet[2733]: E0913 00:24:31.939541 2733 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:24:31.939586 kubelet[2733]: W0913 00:24:31.939553 2733 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:24:31.939586 kubelet[2733]: E0913 00:24:31.939574 2733 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:24:31.939918 kubelet[2733]: E0913 00:24:31.939897 2733 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:24:31.939918 kubelet[2733]: W0913 00:24:31.939915 2733 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:24:31.940027 kubelet[2733]: E0913 00:24:31.939941 2733 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 00:24:31.940345 kubelet[2733]: E0913 00:24:31.940322 2733 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:24:31.940345 kubelet[2733]: W0913 00:24:31.940337 2733 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:24:31.940450 kubelet[2733]: E0913 00:24:31.940438 2733 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:24:31.941056 kubelet[2733]: E0913 00:24:31.941007 2733 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:24:31.941056 kubelet[2733]: W0913 00:24:31.941040 2733 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:24:31.941158 kubelet[2733]: E0913 00:24:31.941083 2733 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 00:24:31.941414 kubelet[2733]: E0913 00:24:31.941391 2733 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:24:31.941414 kubelet[2733]: W0913 00:24:31.941404 2733 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:24:31.941501 kubelet[2733]: E0913 00:24:31.941420 2733 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:24:31.941741 kubelet[2733]: E0913 00:24:31.941716 2733 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:24:31.941911 kubelet[2733]: W0913 00:24:31.941827 2733 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:24:31.941911 kubelet[2733]: E0913 00:24:31.941846 2733 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 00:24:31.942259 kubelet[2733]: E0913 00:24:31.942198 2733 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:24:31.942259 kubelet[2733]: W0913 00:24:31.942215 2733 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:24:31.942259 kubelet[2733]: E0913 00:24:31.942231 2733 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:24:31.944438 containerd[1594]: time="2025-09-13T00:24:31.944304113Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-597fbc4dfb-z4ddc,Uid:7998d59a-2e3e-4b83-b025-12f88f57f773,Namespace:calico-system,Attempt:0,} returns sandbox id \"0fc88dbaf5e702ac1671e24c81619392ad70326dea8d9189895afdb32b6bdd2c\"" Sep 13 00:24:31.947668 kubelet[2733]: E0913 00:24:31.947289 2733 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:24:31.950694 containerd[1594]: time="2025-09-13T00:24:31.950574555Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.3\"" Sep 13 00:24:31.954593 kubelet[2733]: E0913 00:24:31.954557 2733 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:24:31.954593 kubelet[2733]: W0913 00:24:31.954583 2733 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:24:31.954816 kubelet[2733]: E0913 00:24:31.954610 2733 plugins.go:695] "Error dynamically probing plugins" 
err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:24:31.975519 containerd[1594]: time="2025-09-13T00:24:31.975430068Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-tlfp2,Uid:b5f9eaab-3c20-4e25-93ba-0a227f58d28c,Namespace:calico-system,Attempt:0,} returns sandbox id \"9fa63d3de2f32288e608f9778f8f59cb3c5902954d5d3c3bb79722ba616a1207\"" Sep 13 00:24:33.611847 kubelet[2733]: E0913 00:24:33.611804 2733 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-zh9k4" podUID="011904af-1a84-4b2a-90be-5056b5973c41" Sep 13 00:24:34.362103 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2435392420.mount: Deactivated successfully. Sep 13 00:24:35.609816 kubelet[2733]: E0913 00:24:35.608762 2733 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-zh9k4" podUID="011904af-1a84-4b2a-90be-5056b5973c41" Sep 13 00:24:36.904155 containerd[1594]: time="2025-09-13T00:24:36.904067671Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:24:36.905965 containerd[1594]: time="2025-09-13T00:24:36.905918865Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.3: active requests=0, bytes read=35237389" Sep 13 00:24:36.980135 containerd[1594]: time="2025-09-13T00:24:36.980064877Z" level=info msg="ImageCreate event name:\"sha256:1d7bb7b0cce2924d35c7c26f6b6600409ea7c9535074c3d2e517ffbb3a0e0b36\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 
00:24:37.057352 containerd[1594]: time="2025-09-13T00:24:37.057245556Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:f4a3d61ffda9c98a53adeb412c5af404ca3727a3cc2d0b4ef28d197bdd47ecaa\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:24:37.058085 containerd[1594]: time="2025-09-13T00:24:37.058021248Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.3\" with image id \"sha256:1d7bb7b0cce2924d35c7c26f6b6600409ea7c9535074c3d2e517ffbb3a0e0b36\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:f4a3d61ffda9c98a53adeb412c5af404ca3727a3cc2d0b4ef28d197bdd47ecaa\", size \"35237243\" in 5.106459596s" Sep 13 00:24:37.058085 containerd[1594]: time="2025-09-13T00:24:37.058062627Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.3\" returns image reference \"sha256:1d7bb7b0cce2924d35c7c26f6b6600409ea7c9535074c3d2e517ffbb3a0e0b36\"" Sep 13 00:24:37.059972 containerd[1594]: time="2025-09-13T00:24:37.059925121Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\"" Sep 13 00:24:37.072852 containerd[1594]: time="2025-09-13T00:24:37.072742240Z" level=info msg="CreateContainer within sandbox \"0fc88dbaf5e702ac1671e24c81619392ad70326dea8d9189895afdb32b6bdd2c\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Sep 13 00:24:37.087757 containerd[1594]: time="2025-09-13T00:24:37.087676183Z" level=info msg="Container 985d0ce4bd3472c4530e71d1938fdb24b2eb77c6c406e2a6806f5b144e3529e6: CDI devices from CRI Config.CDIDevices: []" Sep 13 00:24:37.100659 containerd[1594]: time="2025-09-13T00:24:37.100583072Z" level=info msg="CreateContainer within sandbox \"0fc88dbaf5e702ac1671e24c81619392ad70326dea8d9189895afdb32b6bdd2c\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"985d0ce4bd3472c4530e71d1938fdb24b2eb77c6c406e2a6806f5b144e3529e6\"" Sep 13 00:24:37.101620 containerd[1594]: 
time="2025-09-13T00:24:37.101599490Z" level=info msg="StartContainer for \"985d0ce4bd3472c4530e71d1938fdb24b2eb77c6c406e2a6806f5b144e3529e6\"" Sep 13 00:24:37.103123 containerd[1594]: time="2025-09-13T00:24:37.103059304Z" level=info msg="connecting to shim 985d0ce4bd3472c4530e71d1938fdb24b2eb77c6c406e2a6806f5b144e3529e6" address="unix:///run/containerd/s/285197681c70492442196d7b2e789e7fcb4283bd7929ac060464bb1e9f8ee743" protocol=ttrpc version=3 Sep 13 00:24:37.131960 systemd[1]: Started cri-containerd-985d0ce4bd3472c4530e71d1938fdb24b2eb77c6c406e2a6806f5b144e3529e6.scope - libcontainer container 985d0ce4bd3472c4530e71d1938fdb24b2eb77c6c406e2a6806f5b144e3529e6. Sep 13 00:24:37.189204 containerd[1594]: time="2025-09-13T00:24:37.189083761Z" level=info msg="StartContainer for \"985d0ce4bd3472c4530e71d1938fdb24b2eb77c6c406e2a6806f5b144e3529e6\" returns successfully" Sep 13 00:24:37.610850 kubelet[2733]: E0913 00:24:37.608734 2733 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-zh9k4" podUID="011904af-1a84-4b2a-90be-5056b5973c41" Sep 13 00:24:37.732885 kubelet[2733]: E0913 00:24:37.732848 2733 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:24:37.742767 kubelet[2733]: I0913 00:24:37.742538 2733 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-597fbc4dfb-z4ddc" podStartSLOduration=1.632007753 podStartE2EDuration="6.742520871s" podCreationTimestamp="2025-09-13 00:24:31 +0000 UTC" firstStartedPulling="2025-09-13 00:24:31.948639565 +0000 UTC m=+20.421122450" lastFinishedPulling="2025-09-13 00:24:37.059152683 +0000 UTC m=+25.531635568" observedRunningTime="2025-09-13 
00:24:37.742276139 +0000 UTC m=+26.214759024" watchObservedRunningTime="2025-09-13 00:24:37.742520871 +0000 UTC m=+26.215003756"
Error: unexpected end of JSON input" Sep 13 00:24:37.777545 kubelet[2733]: E0913 00:24:37.777531 2733 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:24:37.777545 kubelet[2733]: W0913 00:24:37.777541 2733 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:24:37.777624 kubelet[2733]: E0913 00:24:37.777554 2733 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:24:37.777742 kubelet[2733]: E0913 00:24:37.777726 2733 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:24:37.777742 kubelet[2733]: W0913 00:24:37.777737 2733 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:24:37.777854 kubelet[2733]: E0913 00:24:37.777750 2733 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 00:24:37.777968 kubelet[2733]: E0913 00:24:37.777949 2733 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:24:37.777968 kubelet[2733]: W0913 00:24:37.777961 2733 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:24:37.778049 kubelet[2733]: E0913 00:24:37.777972 2733 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:24:37.778213 kubelet[2733]: E0913 00:24:37.778195 2733 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:24:37.778213 kubelet[2733]: W0913 00:24:37.778207 2733 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:24:37.778302 kubelet[2733]: E0913 00:24:37.778217 2733 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 00:24:37.778586 kubelet[2733]: E0913 00:24:37.778557 2733 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:24:37.778586 kubelet[2733]: W0913 00:24:37.778569 2733 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:24:37.778586 kubelet[2733]: E0913 00:24:37.778579 2733 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:24:38.734259 kubelet[2733]: I0913 00:24:38.734226 2733 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 13 00:24:38.734694 kubelet[2733]: E0913 00:24:38.734515 2733 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:24:38.770507 kubelet[2733]: E0913 00:24:38.770479 2733 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:24:38.770507 kubelet[2733]: W0913 00:24:38.770497 2733 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:24:38.770507 kubelet[2733]: E0913 00:24:38.770515 2733 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 00:24:38.770779 kubelet[2733]: E0913 00:24:38.770756 2733 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:24:38.770840 kubelet[2733]: W0913 00:24:38.770775 2733 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:24:38.770840 kubelet[2733]: E0913 00:24:38.770817 2733 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:24:38.771033 kubelet[2733]: E0913 00:24:38.771007 2733 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:24:38.771033 kubelet[2733]: W0913 00:24:38.771020 2733 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:24:38.771033 kubelet[2733]: E0913 00:24:38.771030 2733 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 00:24:38.771302 kubelet[2733]: E0913 00:24:38.771286 2733 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:24:38.771302 kubelet[2733]: W0913 00:24:38.771298 2733 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:24:38.771373 kubelet[2733]: E0913 00:24:38.771333 2733 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:24:38.771534 kubelet[2733]: E0913 00:24:38.771510 2733 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:24:38.771534 kubelet[2733]: W0913 00:24:38.771522 2733 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:24:38.771534 kubelet[2733]: E0913 00:24:38.771532 2733 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 00:24:38.771775 kubelet[2733]: E0913 00:24:38.771754 2733 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:24:38.771775 kubelet[2733]: W0913 00:24:38.771770 2733 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:24:38.771870 kubelet[2733]: E0913 00:24:38.771798 2733 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:24:38.772007 kubelet[2733]: E0913 00:24:38.771990 2733 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:24:38.772007 kubelet[2733]: W0913 00:24:38.772003 2733 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:24:38.772075 kubelet[2733]: E0913 00:24:38.772013 2733 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 00:24:38.772199 kubelet[2733]: E0913 00:24:38.772182 2733 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:24:38.772199 kubelet[2733]: W0913 00:24:38.772195 2733 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:24:38.772272 kubelet[2733]: E0913 00:24:38.772204 2733 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:24:38.772433 kubelet[2733]: E0913 00:24:38.772415 2733 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:24:38.772433 kubelet[2733]: W0913 00:24:38.772428 2733 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:24:38.772510 kubelet[2733]: E0913 00:24:38.772440 2733 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 00:24:38.772682 kubelet[2733]: E0913 00:24:38.772651 2733 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:24:38.772682 kubelet[2733]: W0913 00:24:38.772664 2733 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:24:38.772763 kubelet[2733]: E0913 00:24:38.772687 2733 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:24:38.772921 kubelet[2733]: E0913 00:24:38.772888 2733 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:24:38.772921 kubelet[2733]: W0913 00:24:38.772901 2733 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:24:38.772921 kubelet[2733]: E0913 00:24:38.772912 2733 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 00:24:38.773139 kubelet[2733]: E0913 00:24:38.773121 2733 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:24:38.773139 kubelet[2733]: W0913 00:24:38.773133 2733 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:24:38.773219 kubelet[2733]: E0913 00:24:38.773144 2733 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:24:38.773335 kubelet[2733]: E0913 00:24:38.773319 2733 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:24:38.773335 kubelet[2733]: W0913 00:24:38.773330 2733 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:24:38.773430 kubelet[2733]: E0913 00:24:38.773340 2733 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 00:24:38.773521 kubelet[2733]: E0913 00:24:38.773502 2733 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:24:38.773521 kubelet[2733]: W0913 00:24:38.773514 2733 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:24:38.773591 kubelet[2733]: E0913 00:24:38.773524 2733 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:24:38.773760 kubelet[2733]: E0913 00:24:38.773736 2733 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:24:38.773760 kubelet[2733]: W0913 00:24:38.773750 2733 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:24:38.773760 kubelet[2733]: E0913 00:24:38.773763 2733 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 00:24:38.777902 containerd[1594]: time="2025-09-13T00:24:38.777856973Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:24:38.778691 containerd[1594]: time="2025-09-13T00:24:38.778654357Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3: active requests=0, bytes read=4446660" Sep 13 00:24:38.779939 containerd[1594]: time="2025-09-13T00:24:38.779908872Z" level=info msg="ImageCreate event name:\"sha256:4f2b088ed6fdfc6a97ac0650a4ba8171107d6656ce265c592e4c8423fd10e5c4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:24:38.780065 kubelet[2733]: E0913 00:24:38.780039 2733 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:24:38.780065 kubelet[2733]: W0913 00:24:38.780054 2733 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:24:38.780136 kubelet[2733]: E0913 00:24:38.780067 2733 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 00:24:38.780338 kubelet[2733]: E0913 00:24:38.780322 2733 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:24:38.780338 kubelet[2733]: W0913 00:24:38.780335 2733 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:24:38.780413 kubelet[2733]: E0913 00:24:38.780351 2733 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:24:38.780609 kubelet[2733]: E0913 00:24:38.780555 2733 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:24:38.780609 kubelet[2733]: W0913 00:24:38.780566 2733 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:24:38.780609 kubelet[2733]: E0913 00:24:38.780578 2733 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 00:24:38.780889 kubelet[2733]: E0913 00:24:38.780870 2733 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:24:38.780889 kubelet[2733]: W0913 00:24:38.780885 2733 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:24:38.780976 kubelet[2733]: E0913 00:24:38.780917 2733 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:24:38.781158 kubelet[2733]: E0913 00:24:38.781138 2733 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:24:38.781158 kubelet[2733]: W0913 00:24:38.781153 2733 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:24:38.781244 kubelet[2733]: E0913 00:24:38.781170 2733 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 00:24:38.781404 kubelet[2733]: E0913 00:24:38.781388 2733 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:24:38.781404 kubelet[2733]: W0913 00:24:38.781401 2733 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:24:38.781473 kubelet[2733]: E0913 00:24:38.781418 2733 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:24:38.781681 kubelet[2733]: E0913 00:24:38.781649 2733 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:24:38.781681 kubelet[2733]: W0913 00:24:38.781664 2733 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:24:38.781758 kubelet[2733]: E0913 00:24:38.781693 2733 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 00:24:38.782026 kubelet[2733]: E0913 00:24:38.782006 2733 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:24:38.782026 kubelet[2733]: W0913 00:24:38.782019 2733 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:24:38.782110 kubelet[2733]: E0913 00:24:38.782036 2733 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:24:38.782241 containerd[1594]: time="2025-09-13T00:24:38.782213859Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:81bdfcd9dbd36624dc35354e8c181c75631ba40e6c7df5820f5f56cea36f0ef9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:24:38.783840 kubelet[2733]: E0913 00:24:38.782876 2733 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:24:38.783840 kubelet[2733]: W0913 00:24:38.782917 2733 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:24:38.783840 kubelet[2733]: E0913 00:24:38.782999 2733 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 00:24:38.783840 kubelet[2733]: E0913 00:24:38.783253 2733 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:24:38.783840 kubelet[2733]: W0913 00:24:38.783264 2733 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:24:38.783840 kubelet[2733]: E0913 00:24:38.783295 2733 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:24:38.783840 kubelet[2733]: E0913 00:24:38.783475 2733 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:24:38.783840 kubelet[2733]: W0913 00:24:38.783485 2733 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:24:38.783840 kubelet[2733]: E0913 00:24:38.783533 2733 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 00:24:38.783840 kubelet[2733]: E0913 00:24:38.783695 2733 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:24:38.784179 containerd[1594]: time="2025-09-13T00:24:38.783068180Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\" with image id \"sha256:4f2b088ed6fdfc6a97ac0650a4ba8171107d6656ce265c592e4c8423fd10e5c4\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:81bdfcd9dbd36624dc35354e8c181c75631ba40e6c7df5820f5f56cea36f0ef9\", size \"5939323\" in 1.72310606s" Sep 13 00:24:38.784179 containerd[1594]: time="2025-09-13T00:24:38.783097535Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\" returns image reference \"sha256:4f2b088ed6fdfc6a97ac0650a4ba8171107d6656ce265c592e4c8423fd10e5c4\"" Sep 13 00:24:38.784245 kubelet[2733]: W0913 00:24:38.783706 2733 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:24:38.784245 kubelet[2733]: E0913 00:24:38.783721 2733 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 00:24:38.784245 kubelet[2733]: E0913 00:24:38.783952 2733 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:24:38.784245 kubelet[2733]: W0913 00:24:38.783962 2733 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:24:38.784245 kubelet[2733]: E0913 00:24:38.783991 2733 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:24:38.784587 kubelet[2733]: E0913 00:24:38.784569 2733 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:24:38.784706 kubelet[2733]: W0913 00:24:38.784652 2733 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:24:38.784706 kubelet[2733]: E0913 00:24:38.784687 2733 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 00:24:38.784972 kubelet[2733]: E0913 00:24:38.784930 2733 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:24:38.784972 kubelet[2733]: W0913 00:24:38.784952 2733 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:24:38.784972 kubelet[2733]: E0913 00:24:38.784963 2733 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:24:38.785247 kubelet[2733]: E0913 00:24:38.785167 2733 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:24:38.785247 kubelet[2733]: W0913 00:24:38.785177 2733 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:24:38.785247 kubelet[2733]: E0913 00:24:38.785187 2733 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 00:24:38.785421 containerd[1594]: time="2025-09-13T00:24:38.785031002Z" level=info msg="CreateContainer within sandbox \"9fa63d3de2f32288e608f9778f8f59cb3c5902954d5d3c3bb79722ba616a1207\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Sep 13 00:24:38.785618 kubelet[2733]: E0913 00:24:38.785597 2733 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:24:38.785618 kubelet[2733]: W0913 00:24:38.785612 2733 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:24:38.785730 kubelet[2733]: E0913 00:24:38.785624 2733 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:24:38.786898 kubelet[2733]: E0913 00:24:38.786875 2733 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:24:38.786898 kubelet[2733]: W0913 00:24:38.786891 2733 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:24:38.786989 kubelet[2733]: E0913 00:24:38.786903 2733 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 00:24:38.794319 containerd[1594]: time="2025-09-13T00:24:38.794278581Z" level=info msg="Container ad98e2fcba837f97ab3fa700d93ddad23f8b0c0c4d5ed9ca65d17eef3ba7980a: CDI devices from CRI Config.CDIDevices: []" Sep 13 00:24:38.803437 containerd[1594]: time="2025-09-13T00:24:38.803400283Z" level=info msg="CreateContainer within sandbox \"9fa63d3de2f32288e608f9778f8f59cb3c5902954d5d3c3bb79722ba616a1207\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"ad98e2fcba837f97ab3fa700d93ddad23f8b0c0c4d5ed9ca65d17eef3ba7980a\"" Sep 13 00:24:38.803931 containerd[1594]: time="2025-09-13T00:24:38.803894335Z" level=info msg="StartContainer for \"ad98e2fcba837f97ab3fa700d93ddad23f8b0c0c4d5ed9ca65d17eef3ba7980a\"" Sep 13 00:24:38.805518 containerd[1594]: time="2025-09-13T00:24:38.805479824Z" level=info msg="connecting to shim ad98e2fcba837f97ab3fa700d93ddad23f8b0c0c4d5ed9ca65d17eef3ba7980a" address="unix:///run/containerd/s/9bf44e24f86a6ddca644f9d67719111f872b07b6c44aec5609b6a8ed59552558" protocol=ttrpc version=3 Sep 13 00:24:38.831920 systemd[1]: Started cri-containerd-ad98e2fcba837f97ab3fa700d93ddad23f8b0c0c4d5ed9ca65d17eef3ba7980a.scope - libcontainer container ad98e2fcba837f97ab3fa700d93ddad23f8b0c0c4d5ed9ca65d17eef3ba7980a. Sep 13 00:24:38.874225 containerd[1594]: time="2025-09-13T00:24:38.874182837Z" level=info msg="StartContainer for \"ad98e2fcba837f97ab3fa700d93ddad23f8b0c0c4d5ed9ca65d17eef3ba7980a\" returns successfully" Sep 13 00:24:38.884620 systemd[1]: cri-containerd-ad98e2fcba837f97ab3fa700d93ddad23f8b0c0c4d5ed9ca65d17eef3ba7980a.scope: Deactivated successfully. 
Sep 13 00:24:38.886118 containerd[1594]: time="2025-09-13T00:24:38.886081174Z" level=info msg="received exit event container_id:\"ad98e2fcba837f97ab3fa700d93ddad23f8b0c0c4d5ed9ca65d17eef3ba7980a\" id:\"ad98e2fcba837f97ab3fa700d93ddad23f8b0c0c4d5ed9ca65d17eef3ba7980a\" pid:3476 exited_at:{seconds:1757723078 nanos:885724722}" Sep 13 00:24:38.886176 containerd[1594]: time="2025-09-13T00:24:38.886161466Z" level=info msg="TaskExit event in podsandbox handler container_id:\"ad98e2fcba837f97ab3fa700d93ddad23f8b0c0c4d5ed9ca65d17eef3ba7980a\" id:\"ad98e2fcba837f97ab3fa700d93ddad23f8b0c0c4d5ed9ca65d17eef3ba7980a\" pid:3476 exited_at:{seconds:1757723078 nanos:885724722}" Sep 13 00:24:38.908812 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ad98e2fcba837f97ab3fa700d93ddad23f8b0c0c4d5ed9ca65d17eef3ba7980a-rootfs.mount: Deactivated successfully. Sep 13 00:24:39.608834 kubelet[2733]: E0913 00:24:39.608763 2733 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-zh9k4" podUID="011904af-1a84-4b2a-90be-5056b5973c41" Sep 13 00:24:40.742574 containerd[1594]: time="2025-09-13T00:24:40.742419817Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.3\"" Sep 13 00:24:41.609195 kubelet[2733]: E0913 00:24:41.608827 2733 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-zh9k4" podUID="011904af-1a84-4b2a-90be-5056b5973c41" Sep 13 00:24:43.608886 kubelet[2733]: E0913 00:24:43.608821 2733 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-zh9k4" podUID="011904af-1a84-4b2a-90be-5056b5973c41" Sep 13 00:24:45.608799 kubelet[2733]: E0913 00:24:45.608718 2733 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-zh9k4" podUID="011904af-1a84-4b2a-90be-5056b5973c41" Sep 13 00:24:46.390113 containerd[1594]: time="2025-09-13T00:24:46.390045172Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:24:46.392137 containerd[1594]: time="2025-09-13T00:24:46.392059533Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.3: active requests=0, bytes read=70440613" Sep 13 00:24:46.394114 containerd[1594]: time="2025-09-13T00:24:46.394063683Z" level=info msg="ImageCreate event name:\"sha256:034822460c2f667e1f4a7679c843cc35ce1bf2c25dec86f04e07fb403df7e458\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:24:46.397006 containerd[1594]: time="2025-09-13T00:24:46.396951527Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:73d1e391050490d54e5bee8ff2b1a50a8be1746c98dc530361b00e8c0ab63f87\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:24:46.397763 containerd[1594]: time="2025-09-13T00:24:46.397695828Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.3\" with image id \"sha256:034822460c2f667e1f4a7679c843cc35ce1bf2c25dec86f04e07fb403df7e458\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:73d1e391050490d54e5bee8ff2b1a50a8be1746c98dc530361b00e8c0ab63f87\", size \"71933316\" in 5.655226588s" Sep 13 00:24:46.397763 containerd[1594]: time="2025-09-13T00:24:46.397751001Z" 
level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.3\" returns image reference \"sha256:034822460c2f667e1f4a7679c843cc35ce1bf2c25dec86f04e07fb403df7e458\"" Sep 13 00:24:46.400000 containerd[1594]: time="2025-09-13T00:24:46.399945691Z" level=info msg="CreateContainer within sandbox \"9fa63d3de2f32288e608f9778f8f59cb3c5902954d5d3c3bb79722ba616a1207\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Sep 13 00:24:46.412858 containerd[1594]: time="2025-09-13T00:24:46.412805729Z" level=info msg="Container 8ac9ce39525f7dd04820ad442f909aa54ce1db479c87de996fc5bd964c0e95c7: CDI devices from CRI Config.CDIDevices: []" Sep 13 00:24:46.426068 containerd[1594]: time="2025-09-13T00:24:46.426020914Z" level=info msg="CreateContainer within sandbox \"9fa63d3de2f32288e608f9778f8f59cb3c5902954d5d3c3bb79722ba616a1207\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"8ac9ce39525f7dd04820ad442f909aa54ce1db479c87de996fc5bd964c0e95c7\"" Sep 13 00:24:46.426688 containerd[1594]: time="2025-09-13T00:24:46.426644467Z" level=info msg="StartContainer for \"8ac9ce39525f7dd04820ad442f909aa54ce1db479c87de996fc5bd964c0e95c7\"" Sep 13 00:24:46.428267 containerd[1594]: time="2025-09-13T00:24:46.428231033Z" level=info msg="connecting to shim 8ac9ce39525f7dd04820ad442f909aa54ce1db479c87de996fc5bd964c0e95c7" address="unix:///run/containerd/s/9bf44e24f86a6ddca644f9d67719111f872b07b6c44aec5609b6a8ed59552558" protocol=ttrpc version=3 Sep 13 00:24:46.455024 systemd[1]: Started cri-containerd-8ac9ce39525f7dd04820ad442f909aa54ce1db479c87de996fc5bd964c0e95c7.scope - libcontainer container 8ac9ce39525f7dd04820ad442f909aa54ce1db479c87de996fc5bd964c0e95c7. 
Sep 13 00:24:46.505853 containerd[1594]: time="2025-09-13T00:24:46.505803723Z" level=info msg="StartContainer for \"8ac9ce39525f7dd04820ad442f909aa54ce1db479c87de996fc5bd964c0e95c7\" returns successfully" Sep 13 00:24:47.609386 kubelet[2733]: E0913 00:24:47.609333 2733 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-zh9k4" podUID="011904af-1a84-4b2a-90be-5056b5973c41" Sep 13 00:24:49.268192 containerd[1594]: time="2025-09-13T00:24:49.268133690Z" level=error msg="failed to reload cni configuration after receiving fs change event(WRITE \"/etc/cni/net.d/calico-kubeconfig\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Sep 13 00:24:49.271989 systemd[1]: cri-containerd-8ac9ce39525f7dd04820ad442f909aa54ce1db479c87de996fc5bd964c0e95c7.scope: Deactivated successfully. Sep 13 00:24:49.272420 systemd[1]: cri-containerd-8ac9ce39525f7dd04820ad442f909aa54ce1db479c87de996fc5bd964c0e95c7.scope: Consumed 620ms CPU time, 179.3M memory peak, 2.8M read from disk, 171.3M written to disk. 
Sep 13 00:24:49.273848 containerd[1594]: time="2025-09-13T00:24:49.273777473Z" level=info msg="TaskExit event in podsandbox handler container_id:\"8ac9ce39525f7dd04820ad442f909aa54ce1db479c87de996fc5bd964c0e95c7\" id:\"8ac9ce39525f7dd04820ad442f909aa54ce1db479c87de996fc5bd964c0e95c7\" pid:3535 exited_at:{seconds:1757723089 nanos:273421174}" Sep 13 00:24:49.274520 containerd[1594]: time="2025-09-13T00:24:49.274481297Z" level=info msg="received exit event container_id:\"8ac9ce39525f7dd04820ad442f909aa54ce1db479c87de996fc5bd964c0e95c7\" id:\"8ac9ce39525f7dd04820ad442f909aa54ce1db479c87de996fc5bd964c0e95c7\" pid:3535 exited_at:{seconds:1757723089 nanos:273421174}" Sep 13 00:24:49.299071 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-8ac9ce39525f7dd04820ad442f909aa54ce1db479c87de996fc5bd964c0e95c7-rootfs.mount: Deactivated successfully. Sep 13 00:24:49.311156 kubelet[2733]: I0913 00:24:49.310417 2733 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Sep 13 00:24:49.405656 systemd[1]: Created slice kubepods-burstable-podc24bc14a_01c5_4568_b35c_bcb58b6575e4.slice - libcontainer container kubepods-burstable-podc24bc14a_01c5_4568_b35c_bcb58b6575e4.slice. Sep 13 00:24:49.424661 systemd[1]: Created slice kubepods-besteffort-pod89c5ca25_5c32_4aa7_8832_b12816552076.slice - libcontainer container kubepods-besteffort-pod89c5ca25_5c32_4aa7_8832_b12816552076.slice. Sep 13 00:24:49.431417 systemd[1]: Created slice kubepods-besteffort-pod2e8d886d_fbcb_4799_a16f_a0e63452b1a6.slice - libcontainer container kubepods-besteffort-pod2e8d886d_fbcb_4799_a16f_a0e63452b1a6.slice. Sep 13 00:24:49.438881 systemd[1]: Created slice kubepods-besteffort-podaa9cc419_68f4_433f_aaf2_3ef2cf0d2915.slice - libcontainer container kubepods-besteffort-podaa9cc419_68f4_433f_aaf2_3ef2cf0d2915.slice. 
Sep 13 00:24:49.444557 systemd[1]: Created slice kubepods-burstable-podaba01603_30d6_40a1_a7ff_d2401c8a52b4.slice - libcontainer container kubepods-burstable-podaba01603_30d6_40a1_a7ff_d2401c8a52b4.slice. Sep 13 00:24:49.450824 systemd[1]: Created slice kubepods-besteffort-pod232536de_9922_44d0_aa94_90d23c7df90a.slice - libcontainer container kubepods-besteffort-pod232536de_9922_44d0_aa94_90d23c7df90a.slice. Sep 13 00:24:49.455715 systemd[1]: Created slice kubepods-besteffort-pod2be952e7_8835_4c16_a30d_a8e92050cd73.slice - libcontainer container kubepods-besteffort-pod2be952e7_8835_4c16_a30d_a8e92050cd73.slice. Sep 13 00:24:49.463275 kubelet[2733]: I0913 00:24:49.463232 2733 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qdw8b\" (UniqueName: \"kubernetes.io/projected/2be952e7-8835-4c16-a30d-a8e92050cd73-kube-api-access-qdw8b\") pod \"calico-apiserver-6c686fdbf4-mjkpg\" (UID: \"2be952e7-8835-4c16-a30d-a8e92050cd73\") " pod="calico-apiserver/calico-apiserver-6c686fdbf4-mjkpg" Sep 13 00:24:49.463275 kubelet[2733]: I0913 00:24:49.463268 2733 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/89c5ca25-5c32-4aa7-8832-b12816552076-calico-apiserver-certs\") pod \"calico-apiserver-6c686fdbf4-6mwrv\" (UID: \"89c5ca25-5c32-4aa7-8832-b12816552076\") " pod="calico-apiserver/calico-apiserver-6c686fdbf4-6mwrv" Sep 13 00:24:49.463445 kubelet[2733]: I0913 00:24:49.463286 2733 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/232536de-9922-44d0-aa94-90d23c7df90a-goldmane-ca-bundle\") pod \"goldmane-54d579b49d-wq4t8\" (UID: \"232536de-9922-44d0-aa94-90d23c7df90a\") " pod="calico-system/goldmane-54d579b49d-wq4t8" Sep 13 00:24:49.463445 kubelet[2733]: I0913 00:24:49.463299 2733 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c2zns\" (UniqueName: \"kubernetes.io/projected/89c5ca25-5c32-4aa7-8832-b12816552076-kube-api-access-c2zns\") pod \"calico-apiserver-6c686fdbf4-6mwrv\" (UID: \"89c5ca25-5c32-4aa7-8832-b12816552076\") " pod="calico-apiserver/calico-apiserver-6c686fdbf4-6mwrv" Sep 13 00:24:49.463445 kubelet[2733]: I0913 00:24:49.463315 2733 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ddz2n\" (UniqueName: \"kubernetes.io/projected/c24bc14a-01c5-4568-b35c-bcb58b6575e4-kube-api-access-ddz2n\") pod \"coredns-668d6bf9bc-qh5km\" (UID: \"c24bc14a-01c5-4568-b35c-bcb58b6575e4\") " pod="kube-system/coredns-668d6bf9bc-qh5km" Sep 13 00:24:49.463445 kubelet[2733]: I0913 00:24:49.463329 2733 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/2e8d886d-fbcb-4799-a16f-a0e63452b1a6-whisker-backend-key-pair\") pod \"whisker-659f8f94c5-759bw\" (UID: \"2e8d886d-fbcb-4799-a16f-a0e63452b1a6\") " pod="calico-system/whisker-659f8f94c5-759bw" Sep 13 00:24:49.463445 kubelet[2733]: I0913 00:24:49.463363 2733 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r8kt8\" (UniqueName: \"kubernetes.io/projected/aba01603-30d6-40a1-a7ff-d2401c8a52b4-kube-api-access-r8kt8\") pod \"coredns-668d6bf9bc-5b5tl\" (UID: \"aba01603-30d6-40a1-a7ff-d2401c8a52b4\") " pod="kube-system/coredns-668d6bf9bc-5b5tl" Sep 13 00:24:49.463585 kubelet[2733]: I0913 00:24:49.463382 2733 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/aba01603-30d6-40a1-a7ff-d2401c8a52b4-config-volume\") pod \"coredns-668d6bf9bc-5b5tl\" (UID: \"aba01603-30d6-40a1-a7ff-d2401c8a52b4\") " pod="kube-system/coredns-668d6bf9bc-5b5tl" Sep 13 
00:24:49.463585 kubelet[2733]: I0913 00:24:49.463400 2733 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2e8d886d-fbcb-4799-a16f-a0e63452b1a6-whisker-ca-bundle\") pod \"whisker-659f8f94c5-759bw\" (UID: \"2e8d886d-fbcb-4799-a16f-a0e63452b1a6\") " pod="calico-system/whisker-659f8f94c5-759bw" Sep 13 00:24:49.463585 kubelet[2733]: I0913 00:24:49.463420 2733 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/232536de-9922-44d0-aa94-90d23c7df90a-goldmane-key-pair\") pod \"goldmane-54d579b49d-wq4t8\" (UID: \"232536de-9922-44d0-aa94-90d23c7df90a\") " pod="calico-system/goldmane-54d579b49d-wq4t8" Sep 13 00:24:49.463585 kubelet[2733]: I0913 00:24:49.463443 2733 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/2be952e7-8835-4c16-a30d-a8e92050cd73-calico-apiserver-certs\") pod \"calico-apiserver-6c686fdbf4-mjkpg\" (UID: \"2be952e7-8835-4c16-a30d-a8e92050cd73\") " pod="calico-apiserver/calico-apiserver-6c686fdbf4-mjkpg" Sep 13 00:24:49.463585 kubelet[2733]: I0913 00:24:49.463467 2733 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/aa9cc419-68f4-433f-aaf2-3ef2cf0d2915-tigera-ca-bundle\") pod \"calico-kube-controllers-6cdf4f8f49-b6h44\" (UID: \"aa9cc419-68f4-433f-aaf2-3ef2cf0d2915\") " pod="calico-system/calico-kube-controllers-6cdf4f8f49-b6h44" Sep 13 00:24:49.463708 kubelet[2733]: I0913 00:24:49.463493 2733 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-554wg\" (UniqueName: \"kubernetes.io/projected/2e8d886d-fbcb-4799-a16f-a0e63452b1a6-kube-api-access-554wg\") pod \"whisker-659f8f94c5-759bw\" 
(UID: \"2e8d886d-fbcb-4799-a16f-a0e63452b1a6\") " pod="calico-system/whisker-659f8f94c5-759bw" Sep 13 00:24:49.463708 kubelet[2733]: I0913 00:24:49.463508 2733 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c24bc14a-01c5-4568-b35c-bcb58b6575e4-config-volume\") pod \"coredns-668d6bf9bc-qh5km\" (UID: \"c24bc14a-01c5-4568-b35c-bcb58b6575e4\") " pod="kube-system/coredns-668d6bf9bc-qh5km" Sep 13 00:24:49.463708 kubelet[2733]: I0913 00:24:49.463528 2733 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/232536de-9922-44d0-aa94-90d23c7df90a-config\") pod \"goldmane-54d579b49d-wq4t8\" (UID: \"232536de-9922-44d0-aa94-90d23c7df90a\") " pod="calico-system/goldmane-54d579b49d-wq4t8" Sep 13 00:24:49.463708 kubelet[2733]: I0913 00:24:49.463562 2733 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7xspn\" (UniqueName: \"kubernetes.io/projected/232536de-9922-44d0-aa94-90d23c7df90a-kube-api-access-7xspn\") pod \"goldmane-54d579b49d-wq4t8\" (UID: \"232536de-9922-44d0-aa94-90d23c7df90a\") " pod="calico-system/goldmane-54d579b49d-wq4t8" Sep 13 00:24:49.463708 kubelet[2733]: I0913 00:24:49.463584 2733 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7b7sv\" (UniqueName: \"kubernetes.io/projected/aa9cc419-68f4-433f-aaf2-3ef2cf0d2915-kube-api-access-7b7sv\") pod \"calico-kube-controllers-6cdf4f8f49-b6h44\" (UID: \"aa9cc419-68f4-433f-aaf2-3ef2cf0d2915\") " pod="calico-system/calico-kube-controllers-6cdf4f8f49-b6h44" Sep 13 00:24:49.632074 systemd[1]: Created slice kubepods-besteffort-pod011904af_1a84_4b2a_90be_5056b5973c41.slice - libcontainer container kubepods-besteffort-pod011904af_1a84_4b2a_90be_5056b5973c41.slice. 
Sep 13 00:24:49.634849 containerd[1594]: time="2025-09-13T00:24:49.634775942Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-zh9k4,Uid:011904af-1a84-4b2a-90be-5056b5973c41,Namespace:calico-system,Attempt:0,}" Sep 13 00:24:49.881671 containerd[1594]: time="2025-09-13T00:24:49.881611557Z" level=error msg="Failed to destroy network for sandbox \"a097409a4d00694323b5c14fef2c68e7f488671b974f49ccfd03818c9a9e80e2\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:24:49.905457 kubelet[2733]: E0913 00:24:49.905125 2733 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:24:49.905896 containerd[1594]: time="2025-09-13T00:24:49.905858253Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-qh5km,Uid:c24bc14a-01c5-4568-b35c-bcb58b6575e4,Namespace:kube-system,Attempt:0,}" Sep 13 00:24:49.905999 containerd[1594]: time="2025-09-13T00:24:49.905858344Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6cdf4f8f49-b6h44,Uid:aa9cc419-68f4-433f-aaf2-3ef2cf0d2915,Namespace:calico-system,Attempt:0,}" Sep 13 00:24:49.906599 kubelet[2733]: E0913 00:24:49.906577 2733 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:24:49.907039 containerd[1594]: time="2025-09-13T00:24:49.906875977Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-659f8f94c5-759bw,Uid:2e8d886d-fbcb-4799-a16f-a0e63452b1a6,Namespace:calico-system,Attempt:0,}" Sep 13 00:24:49.907039 containerd[1594]: time="2025-09-13T00:24:49.906979742Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:coredns-668d6bf9bc-5b5tl,Uid:aba01603-30d6-40a1-a7ff-d2401c8a52b4,Namespace:kube-system,Attempt:0,}" Sep 13 00:24:49.907432 containerd[1594]: time="2025-09-13T00:24:49.907294504Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-54d579b49d-wq4t8,Uid:232536de-9922-44d0-aa94-90d23c7df90a,Namespace:calico-system,Attempt:0,}" Sep 13 00:24:49.908847 containerd[1594]: time="2025-09-13T00:24:49.908814573Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6c686fdbf4-6mwrv,Uid:89c5ca25-5c32-4aa7-8832-b12816552076,Namespace:calico-apiserver,Attempt:0,}" Sep 13 00:24:49.911688 containerd[1594]: time="2025-09-13T00:24:49.911647609Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6c686fdbf4-mjkpg,Uid:2be952e7-8835-4c16-a30d-a8e92050cd73,Namespace:calico-apiserver,Attempt:0,}" Sep 13 00:24:49.915045 containerd[1594]: time="2025-09-13T00:24:49.914907410Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-zh9k4,Uid:011904af-1a84-4b2a-90be-5056b5973c41,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"a097409a4d00694323b5c14fef2c68e7f488671b974f49ccfd03818c9a9e80e2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:24:49.915228 kubelet[2733]: E0913 00:24:49.915127 2733 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a097409a4d00694323b5c14fef2c68e7f488671b974f49ccfd03818c9a9e80e2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:24:49.915228 kubelet[2733]: E0913 00:24:49.915213 2733 kuberuntime_sandbox.go:72] "Failed to 
create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a097409a4d00694323b5c14fef2c68e7f488671b974f49ccfd03818c9a9e80e2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-zh9k4" Sep 13 00:24:49.915312 kubelet[2733]: E0913 00:24:49.915250 2733 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a097409a4d00694323b5c14fef2c68e7f488671b974f49ccfd03818c9a9e80e2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-zh9k4" Sep 13 00:24:49.915357 kubelet[2733]: E0913 00:24:49.915315 2733 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-zh9k4_calico-system(011904af-1a84-4b2a-90be-5056b5973c41)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-zh9k4_calico-system(011904af-1a84-4b2a-90be-5056b5973c41)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"a097409a4d00694323b5c14fef2c68e7f488671b974f49ccfd03818c9a9e80e2\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-zh9k4" podUID="011904af-1a84-4b2a-90be-5056b5973c41" Sep 13 00:24:50.037239 containerd[1594]: time="2025-09-13T00:24:50.037083901Z" level=error msg="Failed to destroy network for sandbox \"ba219301faa07d7b416905166579528079c416094079d4a6dbe6b67be8f33f1d\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the 
calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:24:50.038648 containerd[1594]: time="2025-09-13T00:24:50.038617034Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6cdf4f8f49-b6h44,Uid:aa9cc419-68f4-433f-aaf2-3ef2cf0d2915,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"ba219301faa07d7b416905166579528079c416094079d4a6dbe6b67be8f33f1d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:24:50.039055 kubelet[2733]: E0913 00:24:50.039010 2733 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ba219301faa07d7b416905166579528079c416094079d4a6dbe6b67be8f33f1d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:24:50.039173 kubelet[2733]: E0913 00:24:50.039156 2733 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ba219301faa07d7b416905166579528079c416094079d4a6dbe6b67be8f33f1d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-6cdf4f8f49-b6h44" Sep 13 00:24:50.039285 kubelet[2733]: E0913 00:24:50.039269 2733 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ba219301faa07d7b416905166579528079c416094079d4a6dbe6b67be8f33f1d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the 
calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-6cdf4f8f49-b6h44" Sep 13 00:24:50.039416 kubelet[2733]: E0913 00:24:50.039384 2733 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-6cdf4f8f49-b6h44_calico-system(aa9cc419-68f4-433f-aaf2-3ef2cf0d2915)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-6cdf4f8f49-b6h44_calico-system(aa9cc419-68f4-433f-aaf2-3ef2cf0d2915)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"ba219301faa07d7b416905166579528079c416094079d4a6dbe6b67be8f33f1d\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-6cdf4f8f49-b6h44" podUID="aa9cc419-68f4-433f-aaf2-3ef2cf0d2915" Sep 13 00:24:50.050538 containerd[1594]: time="2025-09-13T00:24:50.050013852Z" level=error msg="Failed to destroy network for sandbox \"43cc71f300c3a258e735c1b08f8deea59736fbd3b309c5ba0c263975be3a5fb6\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:24:50.053008 containerd[1594]: time="2025-09-13T00:24:50.052949221Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-659f8f94c5-759bw,Uid:2e8d886d-fbcb-4799-a16f-a0e63452b1a6,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"43cc71f300c3a258e735c1b08f8deea59736fbd3b309c5ba0c263975be3a5fb6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:24:50.053255 kubelet[2733]: E0913 
00:24:50.053205 2733 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"43cc71f300c3a258e735c1b08f8deea59736fbd3b309c5ba0c263975be3a5fb6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:24:50.053311 kubelet[2733]: E0913 00:24:50.053289 2733 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"43cc71f300c3a258e735c1b08f8deea59736fbd3b309c5ba0c263975be3a5fb6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-659f8f94c5-759bw" Sep 13 00:24:50.053339 kubelet[2733]: E0913 00:24:50.053320 2733 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"43cc71f300c3a258e735c1b08f8deea59736fbd3b309c5ba0c263975be3a5fb6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-659f8f94c5-759bw" Sep 13 00:24:50.053396 kubelet[2733]: E0913 00:24:50.053367 2733 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-659f8f94c5-759bw_calico-system(2e8d886d-fbcb-4799-a16f-a0e63452b1a6)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-659f8f94c5-759bw_calico-system(2e8d886d-fbcb-4799-a16f-a0e63452b1a6)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"43cc71f300c3a258e735c1b08f8deea59736fbd3b309c5ba0c263975be3a5fb6\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file 
or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-659f8f94c5-759bw" podUID="2e8d886d-fbcb-4799-a16f-a0e63452b1a6" Sep 13 00:24:50.065044 containerd[1594]: time="2025-09-13T00:24:50.064982506Z" level=error msg="Failed to destroy network for sandbox \"5d99b4e7ba808cd812e36301d60eb2df01aa3a23ceb10b350c29ca748354655b\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:24:50.065259 containerd[1594]: time="2025-09-13T00:24:50.065239799Z" level=error msg="Failed to destroy network for sandbox \"4ad59f8afea1a5ac6116be838ba73e78685bfdced004c29b966295b5a43e28d6\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:24:50.068395 containerd[1594]: time="2025-09-13T00:24:50.068354195Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6c686fdbf4-mjkpg,Uid:2be952e7-8835-4c16-a30d-a8e92050cd73,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"5d99b4e7ba808cd812e36301d60eb2df01aa3a23ceb10b350c29ca748354655b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:24:50.068957 kubelet[2733]: E0913 00:24:50.068781 2733 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5d99b4e7ba808cd812e36301d60eb2df01aa3a23ceb10b350c29ca748354655b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/" Sep 13 00:24:50.068957 kubelet[2733]: E0913 00:24:50.068896 2733 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5d99b4e7ba808cd812e36301d60eb2df01aa3a23ceb10b350c29ca748354655b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6c686fdbf4-mjkpg" Sep 13 00:24:50.069262 kubelet[2733]: E0913 00:24:50.068916 2733 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5d99b4e7ba808cd812e36301d60eb2df01aa3a23ceb10b350c29ca748354655b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6c686fdbf4-mjkpg" Sep 13 00:24:50.069262 kubelet[2733]: E0913 00:24:50.069097 2733 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-6c686fdbf4-mjkpg_calico-apiserver(2be952e7-8835-4c16-a30d-a8e92050cd73)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-6c686fdbf4-mjkpg_calico-apiserver(2be952e7-8835-4c16-a30d-a8e92050cd73)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"5d99b4e7ba808cd812e36301d60eb2df01aa3a23ceb10b350c29ca748354655b\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-6c686fdbf4-mjkpg" podUID="2be952e7-8835-4c16-a30d-a8e92050cd73" Sep 13 00:24:50.071206 containerd[1594]: time="2025-09-13T00:24:50.071163026Z" level=error msg="RunPodSandbox for 
&PodSandboxMetadata{Name:coredns-668d6bf9bc-qh5km,Uid:c24bc14a-01c5-4568-b35c-bcb58b6575e4,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"4ad59f8afea1a5ac6116be838ba73e78685bfdced004c29b966295b5a43e28d6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:24:50.072088 kubelet[2733]: E0913 00:24:50.071374 2733 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4ad59f8afea1a5ac6116be838ba73e78685bfdced004c29b966295b5a43e28d6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:24:50.072088 kubelet[2733]: E0913 00:24:50.071875 2733 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4ad59f8afea1a5ac6116be838ba73e78685bfdced004c29b966295b5a43e28d6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-qh5km" Sep 13 00:24:50.072088 kubelet[2733]: E0913 00:24:50.071923 2733 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4ad59f8afea1a5ac6116be838ba73e78685bfdced004c29b966295b5a43e28d6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-qh5km" Sep 13 00:24:50.072265 kubelet[2733]: E0913 00:24:50.072022 2733 pod_workers.go:1301] "Error syncing pod, 
skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-qh5km_kube-system(c24bc14a-01c5-4568-b35c-bcb58b6575e4)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-qh5km_kube-system(c24bc14a-01c5-4568-b35c-bcb58b6575e4)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"4ad59f8afea1a5ac6116be838ba73e78685bfdced004c29b966295b5a43e28d6\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-qh5km" podUID="c24bc14a-01c5-4568-b35c-bcb58b6575e4" Sep 13 00:24:50.073978 containerd[1594]: time="2025-09-13T00:24:50.073923516Z" level=error msg="Failed to destroy network for sandbox \"7534a6004d1b313ec103c99a2d07d251d4a45f8240cb517c66ca11d822097b7a\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:24:50.077894 containerd[1594]: time="2025-09-13T00:24:50.077840912Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-5b5tl,Uid:aba01603-30d6-40a1-a7ff-d2401c8a52b4,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"7534a6004d1b313ec103c99a2d07d251d4a45f8240cb517c66ca11d822097b7a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:24:50.078225 kubelet[2733]: E0913 00:24:50.078126 2733 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7534a6004d1b313ec103c99a2d07d251d4a45f8240cb517c66ca11d822097b7a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such 
file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:24:50.078225 kubelet[2733]: E0913 00:24:50.078205 2733 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7534a6004d1b313ec103c99a2d07d251d4a45f8240cb517c66ca11d822097b7a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-5b5tl" Sep 13 00:24:50.078321 kubelet[2733]: E0913 00:24:50.078228 2733 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7534a6004d1b313ec103c99a2d07d251d4a45f8240cb517c66ca11d822097b7a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-5b5tl" Sep 13 00:24:50.078321 kubelet[2733]: E0913 00:24:50.078280 2733 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-5b5tl_kube-system(aba01603-30d6-40a1-a7ff-d2401c8a52b4)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-5b5tl_kube-system(aba01603-30d6-40a1-a7ff-d2401c8a52b4)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"7534a6004d1b313ec103c99a2d07d251d4a45f8240cb517c66ca11d822097b7a\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-5b5tl" podUID="aba01603-30d6-40a1-a7ff-d2401c8a52b4" Sep 13 00:24:50.079641 containerd[1594]: time="2025-09-13T00:24:50.079598958Z" level=error msg="Failed to destroy 
network for sandbox \"0cefc205b45260982d588247652f25f3ca800e82b7460a9c870d43c5c12e38fc\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:24:50.081591 containerd[1594]: time="2025-09-13T00:24:50.081474264Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6c686fdbf4-6mwrv,Uid:89c5ca25-5c32-4aa7-8832-b12816552076,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"0cefc205b45260982d588247652f25f3ca800e82b7460a9c870d43c5c12e38fc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:24:50.081720 kubelet[2733]: E0913 00:24:50.081688 2733 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0cefc205b45260982d588247652f25f3ca800e82b7460a9c870d43c5c12e38fc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:24:50.081818 kubelet[2733]: E0913 00:24:50.081739 2733 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0cefc205b45260982d588247652f25f3ca800e82b7460a9c870d43c5c12e38fc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6c686fdbf4-6mwrv" Sep 13 00:24:50.081818 kubelet[2733]: E0913 00:24:50.081757 2733 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup 
network for sandbox \"0cefc205b45260982d588247652f25f3ca800e82b7460a9c870d43c5c12e38fc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6c686fdbf4-6mwrv" Sep 13 00:24:50.081920 kubelet[2733]: E0913 00:24:50.081873 2733 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-6c686fdbf4-6mwrv_calico-apiserver(89c5ca25-5c32-4aa7-8832-b12816552076)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-6c686fdbf4-6mwrv_calico-apiserver(89c5ca25-5c32-4aa7-8832-b12816552076)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"0cefc205b45260982d588247652f25f3ca800e82b7460a9c870d43c5c12e38fc\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-6c686fdbf4-6mwrv" podUID="89c5ca25-5c32-4aa7-8832-b12816552076" Sep 13 00:24:50.087113 containerd[1594]: time="2025-09-13T00:24:50.086975516Z" level=error msg="Failed to destroy network for sandbox \"e3c03b318f1704d3ad89efb3b50c42a78cd8120d93effa4f634e116285656146\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:24:50.088558 containerd[1594]: time="2025-09-13T00:24:50.088507918Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-54d579b49d-wq4t8,Uid:232536de-9922-44d0-aa94-90d23c7df90a,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"e3c03b318f1704d3ad89efb3b50c42a78cd8120d93effa4f634e116285656146\": plugin type=\"calico\" failed (add): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:24:50.088865 kubelet[2733]: E0913 00:24:50.088741 2733 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e3c03b318f1704d3ad89efb3b50c42a78cd8120d93effa4f634e116285656146\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:24:50.088929 kubelet[2733]: E0913 00:24:50.088902 2733 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e3c03b318f1704d3ad89efb3b50c42a78cd8120d93effa4f634e116285656146\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-54d579b49d-wq4t8" Sep 13 00:24:50.088986 kubelet[2733]: E0913 00:24:50.088935 2733 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e3c03b318f1704d3ad89efb3b50c42a78cd8120d93effa4f634e116285656146\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-54d579b49d-wq4t8" Sep 13 00:24:50.089031 kubelet[2733]: E0913 00:24:50.089002 2733 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-54d579b49d-wq4t8_calico-system(232536de-9922-44d0-aa94-90d23c7df90a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-54d579b49d-wq4t8_calico-system(232536de-9922-44d0-aa94-90d23c7df90a)\\\": rpc error: code = Unknown desc = failed to 
setup network for sandbox \\\"e3c03b318f1704d3ad89efb3b50c42a78cd8120d93effa4f634e116285656146\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-54d579b49d-wq4t8" podUID="232536de-9922-44d0-aa94-90d23c7df90a" Sep 13 00:24:50.686845 containerd[1594]: time="2025-09-13T00:24:50.686768536Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.3\"" Sep 13 00:24:53.529637 kubelet[2733]: I0913 00:24:53.529571 2733 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 13 00:24:53.530220 kubelet[2733]: E0913 00:24:53.530094 2733 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:24:53.692454 kubelet[2733]: E0913 00:24:53.692390 2733 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:24:59.467956 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount583750298.mount: Deactivated successfully. 
Sep 13 00:25:00.609275 kubelet[2733]: E0913 00:25:00.609240 2733 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:25:00.610172 containerd[1594]: time="2025-09-13T00:25:00.609629119Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-qh5km,Uid:c24bc14a-01c5-4568-b35c-bcb58b6575e4,Namespace:kube-system,Attempt:0,}" Sep 13 00:25:01.330645 containerd[1594]: time="2025-09-13T00:25:01.330573057Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:25:01.333814 containerd[1594]: time="2025-09-13T00:25:01.333181155Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.3: active requests=0, bytes read=157078339" Sep 13 00:25:01.347914 containerd[1594]: time="2025-09-13T00:25:01.347834279Z" level=info msg="ImageCreate event name:\"sha256:ce9c4ac0f175f22c56e80844e65379d9ebe1d8a4e2bbb38dc1db0f53a8826f0f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:25:01.353099 containerd[1594]: time="2025-09-13T00:25:01.352751753Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:bcb8146fcaeced1e1c88fad3eaa697f1680746bd23c3e7e8d4535bc484c6f2a1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:25:01.354630 containerd[1594]: time="2025-09-13T00:25:01.354585325Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.3\" with image id \"sha256:ce9c4ac0f175f22c56e80844e65379d9ebe1d8a4e2bbb38dc1db0f53a8826f0f\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/node@sha256:bcb8146fcaeced1e1c88fad3eaa697f1680746bd23c3e7e8d4535bc484c6f2a1\", size \"157078201\" in 10.667743081s" Sep 13 00:25:01.354630 containerd[1594]: time="2025-09-13T00:25:01.354632494Z" level=info msg="PullImage 
\"ghcr.io/flatcar/calico/node:v3.30.3\" returns image reference \"sha256:ce9c4ac0f175f22c56e80844e65379d9ebe1d8a4e2bbb38dc1db0f53a8826f0f\"" Sep 13 00:25:01.374095 containerd[1594]: time="2025-09-13T00:25:01.374013988Z" level=info msg="CreateContainer within sandbox \"9fa63d3de2f32288e608f9778f8f59cb3c5902954d5d3c3bb79722ba616a1207\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Sep 13 00:25:01.390650 containerd[1594]: time="2025-09-13T00:25:01.390578080Z" level=error msg="Failed to destroy network for sandbox \"aa64a0471415980f6d0bf71538303492c4f56cde7ea4bf92acaa6790ad691626\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:25:01.393167 systemd[1]: run-netns-cni\x2d0ba8a5f2\x2d6a6d\x2df3e3\x2d40ad\x2da466fdc54495.mount: Deactivated successfully. Sep 13 00:25:01.397164 containerd[1594]: time="2025-09-13T00:25:01.397080680Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-qh5km,Uid:c24bc14a-01c5-4568-b35c-bcb58b6575e4,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"aa64a0471415980f6d0bf71538303492c4f56cde7ea4bf92acaa6790ad691626\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:25:01.397559 kubelet[2733]: E0913 00:25:01.397513 2733 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"aa64a0471415980f6d0bf71538303492c4f56cde7ea4bf92acaa6790ad691626\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:25:01.397641 kubelet[2733]: E0913 
00:25:01.397583 2733 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"aa64a0471415980f6d0bf71538303492c4f56cde7ea4bf92acaa6790ad691626\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-qh5km" Sep 13 00:25:01.397641 kubelet[2733]: E0913 00:25:01.397604 2733 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"aa64a0471415980f6d0bf71538303492c4f56cde7ea4bf92acaa6790ad691626\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-qh5km" Sep 13 00:25:01.397695 kubelet[2733]: E0913 00:25:01.397647 2733 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-qh5km_kube-system(c24bc14a-01c5-4568-b35c-bcb58b6575e4)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-qh5km_kube-system(c24bc14a-01c5-4568-b35c-bcb58b6575e4)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"aa64a0471415980f6d0bf71538303492c4f56cde7ea4bf92acaa6790ad691626\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-qh5km" podUID="c24bc14a-01c5-4568-b35c-bcb58b6575e4" Sep 13 00:25:01.402467 containerd[1594]: time="2025-09-13T00:25:01.402415377Z" level=info msg="Container 402657c57ad2d40d55213b44d0500c19c93d8b60c1182b2a1d2b4c3c4152f7ee: CDI devices from CRI Config.CDIDevices: []" Sep 13 00:25:01.424730 containerd[1594]: 
time="2025-09-13T00:25:01.424659526Z" level=info msg="CreateContainer within sandbox \"9fa63d3de2f32288e608f9778f8f59cb3c5902954d5d3c3bb79722ba616a1207\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"402657c57ad2d40d55213b44d0500c19c93d8b60c1182b2a1d2b4c3c4152f7ee\"" Sep 13 00:25:01.425271 containerd[1594]: time="2025-09-13T00:25:01.425233063Z" level=info msg="StartContainer for \"402657c57ad2d40d55213b44d0500c19c93d8b60c1182b2a1d2b4c3c4152f7ee\"" Sep 13 00:25:01.426706 containerd[1594]: time="2025-09-13T00:25:01.426681022Z" level=info msg="connecting to shim 402657c57ad2d40d55213b44d0500c19c93d8b60c1182b2a1d2b4c3c4152f7ee" address="unix:///run/containerd/s/9bf44e24f86a6ddca644f9d67719111f872b07b6c44aec5609b6a8ed59552558" protocol=ttrpc version=3 Sep 13 00:25:01.445086 systemd[1]: Started cri-containerd-402657c57ad2d40d55213b44d0500c19c93d8b60c1182b2a1d2b4c3c4152f7ee.scope - libcontainer container 402657c57ad2d40d55213b44d0500c19c93d8b60c1182b2a1d2b4c3c4152f7ee. Sep 13 00:25:01.504095 containerd[1594]: time="2025-09-13T00:25:01.504034851Z" level=info msg="StartContainer for \"402657c57ad2d40d55213b44d0500c19c93d8b60c1182b2a1d2b4c3c4152f7ee\" returns successfully" Sep 13 00:25:01.588708 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Sep 13 00:25:01.589668 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. 
Sep 13 00:25:01.742826 kubelet[2733]: I0913 00:25:01.742093 2733 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-tlfp2" podStartSLOduration=1.363418218 podStartE2EDuration="30.742067772s" podCreationTimestamp="2025-09-13 00:24:31 +0000 UTC" firstStartedPulling="2025-09-13 00:24:31.977212539 +0000 UTC m=+20.449695424" lastFinishedPulling="2025-09-13 00:25:01.355862093 +0000 UTC m=+49.828344978" observedRunningTime="2025-09-13 00:25:01.741598161 +0000 UTC m=+50.214081046" watchObservedRunningTime="2025-09-13 00:25:01.742067772 +0000 UTC m=+50.214550657" Sep 13 00:25:01.743460 kubelet[2733]: I0913 00:25:01.743024 2733 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2e8d886d-fbcb-4799-a16f-a0e63452b1a6-whisker-ca-bundle\") pod \"2e8d886d-fbcb-4799-a16f-a0e63452b1a6\" (UID: \"2e8d886d-fbcb-4799-a16f-a0e63452b1a6\") " Sep 13 00:25:01.743460 kubelet[2733]: I0913 00:25:01.743083 2733 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/2e8d886d-fbcb-4799-a16f-a0e63452b1a6-whisker-backend-key-pair\") pod \"2e8d886d-fbcb-4799-a16f-a0e63452b1a6\" (UID: \"2e8d886d-fbcb-4799-a16f-a0e63452b1a6\") " Sep 13 00:25:01.743460 kubelet[2733]: I0913 00:25:01.743107 2733 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-554wg\" (UniqueName: \"kubernetes.io/projected/2e8d886d-fbcb-4799-a16f-a0e63452b1a6-kube-api-access-554wg\") pod \"2e8d886d-fbcb-4799-a16f-a0e63452b1a6\" (UID: \"2e8d886d-fbcb-4799-a16f-a0e63452b1a6\") " Sep 13 00:25:01.751577 kubelet[2733]: I0913 00:25:01.751426 2733 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2e8d886d-fbcb-4799-a16f-a0e63452b1a6-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod 
"2e8d886d-fbcb-4799-a16f-a0e63452b1a6" (UID: "2e8d886d-fbcb-4799-a16f-a0e63452b1a6"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Sep 13 00:25:01.753770 kubelet[2733]: I0913 00:25:01.753143 2733 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2e8d886d-fbcb-4799-a16f-a0e63452b1a6-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "2e8d886d-fbcb-4799-a16f-a0e63452b1a6" (UID: "2e8d886d-fbcb-4799-a16f-a0e63452b1a6"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGIDValue "" Sep 13 00:25:01.756288 kubelet[2733]: I0913 00:25:01.756241 2733 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2e8d886d-fbcb-4799-a16f-a0e63452b1a6-kube-api-access-554wg" (OuterVolumeSpecName: "kube-api-access-554wg") pod "2e8d886d-fbcb-4799-a16f-a0e63452b1a6" (UID: "2e8d886d-fbcb-4799-a16f-a0e63452b1a6"). InnerVolumeSpecName "kube-api-access-554wg". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Sep 13 00:25:01.846043 kubelet[2733]: I0913 00:25:01.845999 2733 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2e8d886d-fbcb-4799-a16f-a0e63452b1a6-whisker-ca-bundle\") on node \"localhost\" DevicePath \"\"" Sep 13 00:25:01.846043 kubelet[2733]: I0913 00:25:01.846039 2733 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/2e8d886d-fbcb-4799-a16f-a0e63452b1a6-whisker-backend-key-pair\") on node \"localhost\" DevicePath \"\"" Sep 13 00:25:01.846043 kubelet[2733]: I0913 00:25:01.846051 2733 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-554wg\" (UniqueName: \"kubernetes.io/projected/2e8d886d-fbcb-4799-a16f-a0e63452b1a6-kube-api-access-554wg\") on node \"localhost\" DevicePath \"\"" Sep 13 00:25:02.026686 systemd[1]: Removed slice kubepods-besteffort-pod2e8d886d_fbcb_4799_a16f_a0e63452b1a6.slice - libcontainer container kubepods-besteffort-pod2e8d886d_fbcb_4799_a16f_a0e63452b1a6.slice. Sep 13 00:25:02.243997 systemd[1]: Created slice kubepods-besteffort-poda939d331_c795_43ac_a495_5744af638db4.slice - libcontainer container kubepods-besteffort-poda939d331_c795_43ac_a495_5744af638db4.slice. 
Sep 13 00:25:02.249184 kubelet[2733]: I0913 00:25:02.249133 2733 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/a939d331-c795-43ac-a495-5744af638db4-whisker-backend-key-pair\") pod \"whisker-d44446994-d9vnz\" (UID: \"a939d331-c795-43ac-a495-5744af638db4\") " pod="calico-system/whisker-d44446994-d9vnz" Sep 13 00:25:02.249184 kubelet[2733]: I0913 00:25:02.249188 2733 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6zlwc\" (UniqueName: \"kubernetes.io/projected/a939d331-c795-43ac-a495-5744af638db4-kube-api-access-6zlwc\") pod \"whisker-d44446994-d9vnz\" (UID: \"a939d331-c795-43ac-a495-5744af638db4\") " pod="calico-system/whisker-d44446994-d9vnz" Sep 13 00:25:02.249425 kubelet[2733]: I0913 00:25:02.249215 2733 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a939d331-c795-43ac-a495-5744af638db4-whisker-ca-bundle\") pod \"whisker-d44446994-d9vnz\" (UID: \"a939d331-c795-43ac-a495-5744af638db4\") " pod="calico-system/whisker-d44446994-d9vnz" Sep 13 00:25:02.315583 systemd[1]: var-lib-kubelet-pods-2e8d886d\x2dfbcb\x2d4799\x2da16f\x2da0e63452b1a6-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d554wg.mount: Deactivated successfully. Sep 13 00:25:02.315730 systemd[1]: var-lib-kubelet-pods-2e8d886d\x2dfbcb\x2d4799\x2da16f\x2da0e63452b1a6-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. 
Sep 13 00:25:02.549983 containerd[1594]: time="2025-09-13T00:25:02.549924888Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-d44446994-d9vnz,Uid:a939d331-c795-43ac-a495-5744af638db4,Namespace:calico-system,Attempt:0,}" Sep 13 00:25:02.609591 containerd[1594]: time="2025-09-13T00:25:02.609541163Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-54d579b49d-wq4t8,Uid:232536de-9922-44d0-aa94-90d23c7df90a,Namespace:calico-system,Attempt:0,}" Sep 13 00:25:02.609767 containerd[1594]: time="2025-09-13T00:25:02.609604001Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6cdf4f8f49-b6h44,Uid:aa9cc419-68f4-433f-aaf2-3ef2cf0d2915,Namespace:calico-system,Attempt:0,}" Sep 13 00:25:02.740970 systemd-networkd[1503]: caliee24b71409c: Link UP Sep 13 00:25:02.741556 systemd-networkd[1503]: caliee24b71409c: Gained carrier Sep 13 00:25:02.759219 containerd[1594]: 2025-09-13 00:25:02.571 [INFO][3966] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Sep 13 00:25:02.759219 containerd[1594]: 2025-09-13 00:25:02.589 [INFO][3966] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-whisker--d44446994--d9vnz-eth0 whisker-d44446994- calico-system a939d331-c795-43ac-a495-5744af638db4 964 0 2025-09-13 00:25:02 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:d44446994 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s localhost whisker-d44446994-d9vnz eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] caliee24b71409c [] [] }} ContainerID="e89d09d5669a9c63d58481a0f4005a6578382af54548a965d7d82098e595ad2a" Namespace="calico-system" Pod="whisker-d44446994-d9vnz" WorkloadEndpoint="localhost-k8s-whisker--d44446994--d9vnz-" Sep 13 00:25:02.759219 containerd[1594]: 2025-09-13 00:25:02.589 [INFO][3966] 
cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="e89d09d5669a9c63d58481a0f4005a6578382af54548a965d7d82098e595ad2a" Namespace="calico-system" Pod="whisker-d44446994-d9vnz" WorkloadEndpoint="localhost-k8s-whisker--d44446994--d9vnz-eth0" Sep 13 00:25:02.759219 containerd[1594]: 2025-09-13 00:25:02.671 [INFO][3979] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="e89d09d5669a9c63d58481a0f4005a6578382af54548a965d7d82098e595ad2a" HandleID="k8s-pod-network.e89d09d5669a9c63d58481a0f4005a6578382af54548a965d7d82098e595ad2a" Workload="localhost-k8s-whisker--d44446994--d9vnz-eth0" Sep 13 00:25:02.759478 containerd[1594]: 2025-09-13 00:25:02.672 [INFO][3979] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="e89d09d5669a9c63d58481a0f4005a6578382af54548a965d7d82098e595ad2a" HandleID="k8s-pod-network.e89d09d5669a9c63d58481a0f4005a6578382af54548a965d7d82098e595ad2a" Workload="localhost-k8s-whisker--d44446994--d9vnz-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0001a3720), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"whisker-d44446994-d9vnz", "timestamp":"2025-09-13 00:25:02.671503292 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 13 00:25:02.759478 containerd[1594]: 2025-09-13 00:25:02.673 [INFO][3979] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:25:02.759478 containerd[1594]: 2025-09-13 00:25:02.673 [INFO][3979] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 13 00:25:02.759478 containerd[1594]: 2025-09-13 00:25:02.674 [INFO][3979] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 13 00:25:02.759478 containerd[1594]: 2025-09-13 00:25:02.684 [INFO][3979] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.e89d09d5669a9c63d58481a0f4005a6578382af54548a965d7d82098e595ad2a" host="localhost" Sep 13 00:25:02.759478 containerd[1594]: 2025-09-13 00:25:02.694 [INFO][3979] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 13 00:25:02.759478 containerd[1594]: 2025-09-13 00:25:02.701 [INFO][3979] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 13 00:25:02.759478 containerd[1594]: 2025-09-13 00:25:02.703 [INFO][3979] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 13 00:25:02.759478 containerd[1594]: 2025-09-13 00:25:02.706 [INFO][3979] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 13 00:25:02.759478 containerd[1594]: 2025-09-13 00:25:02.706 [INFO][3979] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.e89d09d5669a9c63d58481a0f4005a6578382af54548a965d7d82098e595ad2a" host="localhost" Sep 13 00:25:02.759722 containerd[1594]: 2025-09-13 00:25:02.707 [INFO][3979] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.e89d09d5669a9c63d58481a0f4005a6578382af54548a965d7d82098e595ad2a Sep 13 00:25:02.759722 containerd[1594]: 2025-09-13 00:25:02.711 [INFO][3979] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.e89d09d5669a9c63d58481a0f4005a6578382af54548a965d7d82098e595ad2a" host="localhost" Sep 13 00:25:02.759722 containerd[1594]: 2025-09-13 00:25:02.716 [INFO][3979] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.129/26] block=192.168.88.128/26 
handle="k8s-pod-network.e89d09d5669a9c63d58481a0f4005a6578382af54548a965d7d82098e595ad2a" host="localhost" Sep 13 00:25:02.759722 containerd[1594]: 2025-09-13 00:25:02.716 [INFO][3979] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.129/26] handle="k8s-pod-network.e89d09d5669a9c63d58481a0f4005a6578382af54548a965d7d82098e595ad2a" host="localhost" Sep 13 00:25:02.759722 containerd[1594]: 2025-09-13 00:25:02.716 [INFO][3979] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:25:02.759722 containerd[1594]: 2025-09-13 00:25:02.716 [INFO][3979] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.129/26] IPv6=[] ContainerID="e89d09d5669a9c63d58481a0f4005a6578382af54548a965d7d82098e595ad2a" HandleID="k8s-pod-network.e89d09d5669a9c63d58481a0f4005a6578382af54548a965d7d82098e595ad2a" Workload="localhost-k8s-whisker--d44446994--d9vnz-eth0" Sep 13 00:25:02.759883 containerd[1594]: 2025-09-13 00:25:02.722 [INFO][3966] cni-plugin/k8s.go 418: Populated endpoint ContainerID="e89d09d5669a9c63d58481a0f4005a6578382af54548a965d7d82098e595ad2a" Namespace="calico-system" Pod="whisker-d44446994-d9vnz" WorkloadEndpoint="localhost-k8s-whisker--d44446994--d9vnz-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--d44446994--d9vnz-eth0", GenerateName:"whisker-d44446994-", Namespace:"calico-system", SelfLink:"", UID:"a939d331-c795-43ac-a495-5744af638db4", ResourceVersion:"964", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 25, 2, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"d44446994", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), 
OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"whisker-d44446994-d9vnz", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"caliee24b71409c", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:25:02.759883 containerd[1594]: 2025-09-13 00:25:02.723 [INFO][3966] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.129/32] ContainerID="e89d09d5669a9c63d58481a0f4005a6578382af54548a965d7d82098e595ad2a" Namespace="calico-system" Pod="whisker-d44446994-d9vnz" WorkloadEndpoint="localhost-k8s-whisker--d44446994--d9vnz-eth0" Sep 13 00:25:02.759962 containerd[1594]: 2025-09-13 00:25:02.723 [INFO][3966] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to caliee24b71409c ContainerID="e89d09d5669a9c63d58481a0f4005a6578382af54548a965d7d82098e595ad2a" Namespace="calico-system" Pod="whisker-d44446994-d9vnz" WorkloadEndpoint="localhost-k8s-whisker--d44446994--d9vnz-eth0" Sep 13 00:25:02.759962 containerd[1594]: 2025-09-13 00:25:02.742 [INFO][3966] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="e89d09d5669a9c63d58481a0f4005a6578382af54548a965d7d82098e595ad2a" Namespace="calico-system" Pod="whisker-d44446994-d9vnz" WorkloadEndpoint="localhost-k8s-whisker--d44446994--d9vnz-eth0" Sep 13 00:25:02.760009 containerd[1594]: 2025-09-13 00:25:02.743 [INFO][3966] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="e89d09d5669a9c63d58481a0f4005a6578382af54548a965d7d82098e595ad2a" Namespace="calico-system" Pod="whisker-d44446994-d9vnz" 
WorkloadEndpoint="localhost-k8s-whisker--d44446994--d9vnz-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--d44446994--d9vnz-eth0", GenerateName:"whisker-d44446994-", Namespace:"calico-system", SelfLink:"", UID:"a939d331-c795-43ac-a495-5744af638db4", ResourceVersion:"964", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 25, 2, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"d44446994", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"e89d09d5669a9c63d58481a0f4005a6578382af54548a965d7d82098e595ad2a", Pod:"whisker-d44446994-d9vnz", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"caliee24b71409c", MAC:"32:ea:fe:76:80:84", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:25:02.760058 containerd[1594]: 2025-09-13 00:25:02.755 [INFO][3966] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="e89d09d5669a9c63d58481a0f4005a6578382af54548a965d7d82098e595ad2a" Namespace="calico-system" Pod="whisker-d44446994-d9vnz" WorkloadEndpoint="localhost-k8s-whisker--d44446994--d9vnz-eth0" Sep 13 00:25:03.300376 systemd-networkd[1503]: cali1180c996d00: Link UP Sep 13 00:25:03.301447 systemd-networkd[1503]: 
cali1180c996d00: Gained carrier Sep 13 00:25:03.425317 containerd[1594]: 2025-09-13 00:25:02.648 [INFO][3991] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Sep 13 00:25:03.425317 containerd[1594]: 2025-09-13 00:25:02.662 [INFO][3991] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--kube--controllers--6cdf4f8f49--b6h44-eth0 calico-kube-controllers-6cdf4f8f49- calico-system aa9cc419-68f4-433f-aaf2-3ef2cf0d2915 875 0 2025-09-13 00:24:31 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:6cdf4f8f49 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s localhost calico-kube-controllers-6cdf4f8f49-b6h44 eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali1180c996d00 [] [] }} ContainerID="5d244f4027a3ee2f262cd1095c37dff03bd7e0901d216eba66d868d37cfd5663" Namespace="calico-system" Pod="calico-kube-controllers-6cdf4f8f49-b6h44" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6cdf4f8f49--b6h44-" Sep 13 00:25:03.425317 containerd[1594]: 2025-09-13 00:25:02.662 [INFO][3991] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="5d244f4027a3ee2f262cd1095c37dff03bd7e0901d216eba66d868d37cfd5663" Namespace="calico-system" Pod="calico-kube-controllers-6cdf4f8f49-b6h44" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6cdf4f8f49--b6h44-eth0" Sep 13 00:25:03.425317 containerd[1594]: 2025-09-13 00:25:02.704 [INFO][4014] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="5d244f4027a3ee2f262cd1095c37dff03bd7e0901d216eba66d868d37cfd5663" HandleID="k8s-pod-network.5d244f4027a3ee2f262cd1095c37dff03bd7e0901d216eba66d868d37cfd5663" Workload="localhost-k8s-calico--kube--controllers--6cdf4f8f49--b6h44-eth0" Sep 13 
00:25:03.425606 containerd[1594]: 2025-09-13 00:25:02.704 [INFO][4014] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="5d244f4027a3ee2f262cd1095c37dff03bd7e0901d216eba66d868d37cfd5663" HandleID="k8s-pod-network.5d244f4027a3ee2f262cd1095c37dff03bd7e0901d216eba66d868d37cfd5663" Workload="localhost-k8s-calico--kube--controllers--6cdf4f8f49--b6h44-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000139bf0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-kube-controllers-6cdf4f8f49-b6h44", "timestamp":"2025-09-13 00:25:02.704115444 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 13 00:25:03.425606 containerd[1594]: 2025-09-13 00:25:02.704 [INFO][4014] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:25:03.425606 containerd[1594]: 2025-09-13 00:25:02.716 [INFO][4014] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 13 00:25:03.425606 containerd[1594]: 2025-09-13 00:25:02.716 [INFO][4014] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 13 00:25:03.425606 containerd[1594]: 2025-09-13 00:25:02.782 [INFO][4014] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.5d244f4027a3ee2f262cd1095c37dff03bd7e0901d216eba66d868d37cfd5663" host="localhost" Sep 13 00:25:03.425606 containerd[1594]: 2025-09-13 00:25:03.074 [INFO][4014] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 13 00:25:03.425606 containerd[1594]: 2025-09-13 00:25:03.080 [INFO][4014] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 13 00:25:03.425606 containerd[1594]: 2025-09-13 00:25:03.082 [INFO][4014] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 13 00:25:03.425606 containerd[1594]: 2025-09-13 00:25:03.084 [INFO][4014] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 13 00:25:03.425606 containerd[1594]: 2025-09-13 00:25:03.084 [INFO][4014] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.5d244f4027a3ee2f262cd1095c37dff03bd7e0901d216eba66d868d37cfd5663" host="localhost" Sep 13 00:25:03.425947 containerd[1594]: 2025-09-13 00:25:03.086 [INFO][4014] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.5d244f4027a3ee2f262cd1095c37dff03bd7e0901d216eba66d868d37cfd5663 Sep 13 00:25:03.425947 containerd[1594]: 2025-09-13 00:25:03.259 [INFO][4014] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.5d244f4027a3ee2f262cd1095c37dff03bd7e0901d216eba66d868d37cfd5663" host="localhost" Sep 13 00:25:03.425947 containerd[1594]: 2025-09-13 00:25:03.294 [INFO][4014] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.130/26] block=192.168.88.128/26 
handle="k8s-pod-network.5d244f4027a3ee2f262cd1095c37dff03bd7e0901d216eba66d868d37cfd5663" host="localhost" Sep 13 00:25:03.425947 containerd[1594]: 2025-09-13 00:25:03.294 [INFO][4014] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.130/26] handle="k8s-pod-network.5d244f4027a3ee2f262cd1095c37dff03bd7e0901d216eba66d868d37cfd5663" host="localhost" Sep 13 00:25:03.425947 containerd[1594]: 2025-09-13 00:25:03.294 [INFO][4014] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:25:03.425947 containerd[1594]: 2025-09-13 00:25:03.294 [INFO][4014] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.130/26] IPv6=[] ContainerID="5d244f4027a3ee2f262cd1095c37dff03bd7e0901d216eba66d868d37cfd5663" HandleID="k8s-pod-network.5d244f4027a3ee2f262cd1095c37dff03bd7e0901d216eba66d868d37cfd5663" Workload="localhost-k8s-calico--kube--controllers--6cdf4f8f49--b6h44-eth0" Sep 13 00:25:03.426118 containerd[1594]: 2025-09-13 00:25:03.297 [INFO][3991] cni-plugin/k8s.go 418: Populated endpoint ContainerID="5d244f4027a3ee2f262cd1095c37dff03bd7e0901d216eba66d868d37cfd5663" Namespace="calico-system" Pod="calico-kube-controllers-6cdf4f8f49-b6h44" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6cdf4f8f49--b6h44-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--6cdf4f8f49--b6h44-eth0", GenerateName:"calico-kube-controllers-6cdf4f8f49-", Namespace:"calico-system", SelfLink:"", UID:"aa9cc419-68f4-433f-aaf2-3ef2cf0d2915", ResourceVersion:"875", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 24, 31, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"6cdf4f8f49", "projectcalico.org/namespace":"calico-system", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-kube-controllers-6cdf4f8f49-b6h44", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali1180c996d00", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:25:03.426184 containerd[1594]: 2025-09-13 00:25:03.297 [INFO][3991] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.130/32] ContainerID="5d244f4027a3ee2f262cd1095c37dff03bd7e0901d216eba66d868d37cfd5663" Namespace="calico-system" Pod="calico-kube-controllers-6cdf4f8f49-b6h44" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6cdf4f8f49--b6h44-eth0" Sep 13 00:25:03.426184 containerd[1594]: 2025-09-13 00:25:03.297 [INFO][3991] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali1180c996d00 ContainerID="5d244f4027a3ee2f262cd1095c37dff03bd7e0901d216eba66d868d37cfd5663" Namespace="calico-system" Pod="calico-kube-controllers-6cdf4f8f49-b6h44" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6cdf4f8f49--b6h44-eth0" Sep 13 00:25:03.426184 containerd[1594]: 2025-09-13 00:25:03.302 [INFO][3991] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="5d244f4027a3ee2f262cd1095c37dff03bd7e0901d216eba66d868d37cfd5663" Namespace="calico-system" Pod="calico-kube-controllers-6cdf4f8f49-b6h44" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6cdf4f8f49--b6h44-eth0" Sep 13 00:25:03.426269 containerd[1594]: 
2025-09-13 00:25:03.303 [INFO][3991] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="5d244f4027a3ee2f262cd1095c37dff03bd7e0901d216eba66d868d37cfd5663" Namespace="calico-system" Pod="calico-kube-controllers-6cdf4f8f49-b6h44" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6cdf4f8f49--b6h44-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--6cdf4f8f49--b6h44-eth0", GenerateName:"calico-kube-controllers-6cdf4f8f49-", Namespace:"calico-system", SelfLink:"", UID:"aa9cc419-68f4-433f-aaf2-3ef2cf0d2915", ResourceVersion:"875", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 24, 31, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"6cdf4f8f49", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"5d244f4027a3ee2f262cd1095c37dff03bd7e0901d216eba66d868d37cfd5663", Pod:"calico-kube-controllers-6cdf4f8f49-b6h44", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali1180c996d00", MAC:"6e:a3:47:e6:c8:96", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:25:03.426342 containerd[1594]: 
2025-09-13 00:25:03.406 [INFO][3991] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="5d244f4027a3ee2f262cd1095c37dff03bd7e0901d216eba66d868d37cfd5663" Namespace="calico-system" Pod="calico-kube-controllers-6cdf4f8f49-b6h44" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6cdf4f8f49--b6h44-eth0" Sep 13 00:25:03.481313 systemd-networkd[1503]: cali0e884b823a5: Link UP Sep 13 00:25:03.491212 systemd-networkd[1503]: cali0e884b823a5: Gained carrier Sep 13 00:25:03.612764 kubelet[2733]: E0913 00:25:03.612704 2733 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:25:03.618149 containerd[1594]: time="2025-09-13T00:25:03.617690790Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-5b5tl,Uid:aba01603-30d6-40a1-a7ff-d2401c8a52b4,Namespace:kube-system,Attempt:0,}" Sep 13 00:25:03.711220 containerd[1594]: time="2025-09-13T00:25:03.618525366Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-zh9k4,Uid:011904af-1a84-4b2a-90be-5056b5973c41,Namespace:calico-system,Attempt:0,}" Sep 13 00:25:03.711220 containerd[1594]: time="2025-09-13T00:25:03.620444750Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6c686fdbf4-mjkpg,Uid:2be952e7-8835-4c16-a30d-a8e92050cd73,Namespace:calico-apiserver,Attempt:0,}" Sep 13 00:25:03.711220 containerd[1594]: time="2025-09-13T00:25:03.620756085Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6c686fdbf4-6mwrv,Uid:89c5ca25-5c32-4aa7-8832-b12816552076,Namespace:calico-apiserver,Attempt:0,}" Sep 13 00:25:03.711220 containerd[1594]: 2025-09-13 00:25:02.649 [INFO][3984] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Sep 13 00:25:03.711220 containerd[1594]: 2025-09-13 00:25:02.662 [INFO][3984] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: 
&{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-goldmane--54d579b49d--wq4t8-eth0 goldmane-54d579b49d- calico-system 232536de-9922-44d0-aa94-90d23c7df90a 879 0 2025-09-13 00:24:31 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:54d579b49d projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s localhost goldmane-54d579b49d-wq4t8 eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] cali0e884b823a5 [] [] }} ContainerID="bb2f082b253bed92d576ec8f7d0bd47095c31aa2579f153d1578e1ee32051b37" Namespace="calico-system" Pod="goldmane-54d579b49d-wq4t8" WorkloadEndpoint="localhost-k8s-goldmane--54d579b49d--wq4t8-" Sep 13 00:25:03.711220 containerd[1594]: 2025-09-13 00:25:02.662 [INFO][3984] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="bb2f082b253bed92d576ec8f7d0bd47095c31aa2579f153d1578e1ee32051b37" Namespace="calico-system" Pod="goldmane-54d579b49d-wq4t8" WorkloadEndpoint="localhost-k8s-goldmane--54d579b49d--wq4t8-eth0" Sep 13 00:25:03.711963 kubelet[2733]: I0913 00:25:03.625383 2733 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2e8d886d-fbcb-4799-a16f-a0e63452b1a6" path="/var/lib/kubelet/pods/2e8d886d-fbcb-4799-a16f-a0e63452b1a6/volumes" Sep 13 00:25:03.713712 containerd[1594]: 2025-09-13 00:25:02.704 [INFO][4021] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="bb2f082b253bed92d576ec8f7d0bd47095c31aa2579f153d1578e1ee32051b37" HandleID="k8s-pod-network.bb2f082b253bed92d576ec8f7d0bd47095c31aa2579f153d1578e1ee32051b37" Workload="localhost-k8s-goldmane--54d579b49d--wq4t8-eth0" Sep 13 00:25:03.713712 containerd[1594]: 2025-09-13 00:25:02.705 [INFO][4021] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="bb2f082b253bed92d576ec8f7d0bd47095c31aa2579f153d1578e1ee32051b37" 
HandleID="k8s-pod-network.bb2f082b253bed92d576ec8f7d0bd47095c31aa2579f153d1578e1ee32051b37" Workload="localhost-k8s-goldmane--54d579b49d--wq4t8-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0001a59a0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"goldmane-54d579b49d-wq4t8", "timestamp":"2025-09-13 00:25:02.704825517 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 13 00:25:03.713712 containerd[1594]: 2025-09-13 00:25:02.705 [INFO][4021] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:25:03.713712 containerd[1594]: 2025-09-13 00:25:03.294 [INFO][4021] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:25:03.713712 containerd[1594]: 2025-09-13 00:25:03.294 [INFO][4021] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 13 00:25:03.713712 containerd[1594]: 2025-09-13 00:25:03.302 [INFO][4021] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.bb2f082b253bed92d576ec8f7d0bd47095c31aa2579f153d1578e1ee32051b37" host="localhost" Sep 13 00:25:03.713712 containerd[1594]: 2025-09-13 00:25:03.407 [INFO][4021] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 13 00:25:03.713712 containerd[1594]: 2025-09-13 00:25:03.423 [INFO][4021] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 13 00:25:03.713712 containerd[1594]: 2025-09-13 00:25:03.429 [INFO][4021] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 13 00:25:03.714150 containerd[1594]: 2025-09-13 00:25:03.433 [INFO][4021] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 13 00:25:03.714150 containerd[1594]: 2025-09-13 00:25:03.433 
[INFO][4021] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.bb2f082b253bed92d576ec8f7d0bd47095c31aa2579f153d1578e1ee32051b37" host="localhost" Sep 13 00:25:03.714150 containerd[1594]: 2025-09-13 00:25:03.435 [INFO][4021] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.bb2f082b253bed92d576ec8f7d0bd47095c31aa2579f153d1578e1ee32051b37 Sep 13 00:25:03.714150 containerd[1594]: 2025-09-13 00:25:03.446 [INFO][4021] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.bb2f082b253bed92d576ec8f7d0bd47095c31aa2579f153d1578e1ee32051b37" host="localhost" Sep 13 00:25:03.714150 containerd[1594]: 2025-09-13 00:25:03.457 [INFO][4021] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.131/26] block=192.168.88.128/26 handle="k8s-pod-network.bb2f082b253bed92d576ec8f7d0bd47095c31aa2579f153d1578e1ee32051b37" host="localhost" Sep 13 00:25:03.714150 containerd[1594]: 2025-09-13 00:25:03.458 [INFO][4021] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.131/26] handle="k8s-pod-network.bb2f082b253bed92d576ec8f7d0bd47095c31aa2579f153d1578e1ee32051b37" host="localhost" Sep 13 00:25:03.714150 containerd[1594]: 2025-09-13 00:25:03.458 [INFO][4021] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Sep 13 00:25:03.714150 containerd[1594]: 2025-09-13 00:25:03.458 [INFO][4021] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.131/26] IPv6=[] ContainerID="bb2f082b253bed92d576ec8f7d0bd47095c31aa2579f153d1578e1ee32051b37" HandleID="k8s-pod-network.bb2f082b253bed92d576ec8f7d0bd47095c31aa2579f153d1578e1ee32051b37" Workload="localhost-k8s-goldmane--54d579b49d--wq4t8-eth0" Sep 13 00:25:03.714482 containerd[1594]: 2025-09-13 00:25:03.469 [INFO][3984] cni-plugin/k8s.go 418: Populated endpoint ContainerID="bb2f082b253bed92d576ec8f7d0bd47095c31aa2579f153d1578e1ee32051b37" Namespace="calico-system" Pod="goldmane-54d579b49d-wq4t8" WorkloadEndpoint="localhost-k8s-goldmane--54d579b49d--wq4t8-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--54d579b49d--wq4t8-eth0", GenerateName:"goldmane-54d579b49d-", Namespace:"calico-system", SelfLink:"", UID:"232536de-9922-44d0-aa94-90d23c7df90a", ResourceVersion:"879", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 24, 31, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"54d579b49d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"goldmane-54d579b49d-wq4t8", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali0e884b823a5", 
MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:25:03.714482 containerd[1594]: 2025-09-13 00:25:03.472 [INFO][3984] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.131/32] ContainerID="bb2f082b253bed92d576ec8f7d0bd47095c31aa2579f153d1578e1ee32051b37" Namespace="calico-system" Pod="goldmane-54d579b49d-wq4t8" WorkloadEndpoint="localhost-k8s-goldmane--54d579b49d--wq4t8-eth0" Sep 13 00:25:03.714601 containerd[1594]: 2025-09-13 00:25:03.472 [INFO][3984] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali0e884b823a5 ContainerID="bb2f082b253bed92d576ec8f7d0bd47095c31aa2579f153d1578e1ee32051b37" Namespace="calico-system" Pod="goldmane-54d579b49d-wq4t8" WorkloadEndpoint="localhost-k8s-goldmane--54d579b49d--wq4t8-eth0" Sep 13 00:25:03.714601 containerd[1594]: 2025-09-13 00:25:03.495 [INFO][3984] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="bb2f082b253bed92d576ec8f7d0bd47095c31aa2579f153d1578e1ee32051b37" Namespace="calico-system" Pod="goldmane-54d579b49d-wq4t8" WorkloadEndpoint="localhost-k8s-goldmane--54d579b49d--wq4t8-eth0" Sep 13 00:25:03.720129 containerd[1594]: 2025-09-13 00:25:03.501 [INFO][3984] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="bb2f082b253bed92d576ec8f7d0bd47095c31aa2579f153d1578e1ee32051b37" Namespace="calico-system" Pod="goldmane-54d579b49d-wq4t8" WorkloadEndpoint="localhost-k8s-goldmane--54d579b49d--wq4t8-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--54d579b49d--wq4t8-eth0", GenerateName:"goldmane-54d579b49d-", Namespace:"calico-system", SelfLink:"", UID:"232536de-9922-44d0-aa94-90d23c7df90a", ResourceVersion:"879", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 24, 31, 0, time.Local), DeletionTimestamp:, 
DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"54d579b49d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"bb2f082b253bed92d576ec8f7d0bd47095c31aa2579f153d1578e1ee32051b37", Pod:"goldmane-54d579b49d-wq4t8", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali0e884b823a5", MAC:"0e:0c:8f:9e:f8:bb", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:25:03.720253 containerd[1594]: 2025-09-13 00:25:03.697 [INFO][3984] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="bb2f082b253bed92d576ec8f7d0bd47095c31aa2579f153d1578e1ee32051b37" Namespace="calico-system" Pod="goldmane-54d579b49d-wq4t8" WorkloadEndpoint="localhost-k8s-goldmane--54d579b49d--wq4t8-eth0" Sep 13 00:25:03.780809 containerd[1594]: time="2025-09-13T00:25:03.780664133Z" level=info msg="connecting to shim 5d244f4027a3ee2f262cd1095c37dff03bd7e0901d216eba66d868d37cfd5663" address="unix:///run/containerd/s/f6c11b85237116897ac280722d5bda214b490e08b6354341e68938cd0c4d5ec8" namespace=k8s.io protocol=ttrpc version=3 Sep 13 00:25:03.822967 systemd[1]: Started cri-containerd-5d244f4027a3ee2f262cd1095c37dff03bd7e0901d216eba66d868d37cfd5663.scope - libcontainer container 5d244f4027a3ee2f262cd1095c37dff03bd7e0901d216eba66d868d37cfd5663. 
Sep 13 00:25:03.837702 systemd-resolved[1408]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 13 00:25:04.058110 systemd-networkd[1503]: vxlan.calico: Link UP Sep 13 00:25:04.058121 systemd-networkd[1503]: vxlan.calico: Gained carrier Sep 13 00:25:04.482549 containerd[1594]: time="2025-09-13T00:25:04.482504715Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6cdf4f8f49-b6h44,Uid:aa9cc419-68f4-433f-aaf2-3ef2cf0d2915,Namespace:calico-system,Attempt:0,} returns sandbox id \"5d244f4027a3ee2f262cd1095c37dff03bd7e0901d216eba66d868d37cfd5663\"" Sep 13 00:25:04.485058 containerd[1594]: time="2025-09-13T00:25:04.484987356Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\"" Sep 13 00:25:04.544540 systemd-networkd[1503]: caliee24b71409c: Gained IPv6LL Sep 13 00:25:04.602392 containerd[1594]: time="2025-09-13T00:25:04.602284700Z" level=info msg="connecting to shim e89d09d5669a9c63d58481a0f4005a6578382af54548a965d7d82098e595ad2a" address="unix:///run/containerd/s/ab4b2175a94c8432d09b2a8fcd716e8bf295437e0576f55805a385d303d21cfe" namespace=k8s.io protocol=ttrpc version=3 Sep 13 00:25:04.612191 containerd[1594]: time="2025-09-13T00:25:04.612099034Z" level=info msg="connecting to shim bb2f082b253bed92d576ec8f7d0bd47095c31aa2579f153d1578e1ee32051b37" address="unix:///run/containerd/s/1c80574a44241ac7b9927a873811a7aa1234598f03ca5ec4197ded09b42d065f" namespace=k8s.io protocol=ttrpc version=3 Sep 13 00:25:04.640131 systemd[1]: Started cri-containerd-e89d09d5669a9c63d58481a0f4005a6578382af54548a965d7d82098e595ad2a.scope - libcontainer container e89d09d5669a9c63d58481a0f4005a6578382af54548a965d7d82098e595ad2a. Sep 13 00:25:04.644496 systemd[1]: Started cri-containerd-bb2f082b253bed92d576ec8f7d0bd47095c31aa2579f153d1578e1ee32051b37.scope - libcontainer container bb2f082b253bed92d576ec8f7d0bd47095c31aa2579f153d1578e1ee32051b37. 
Sep 13 00:25:04.673982 systemd-resolved[1408]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 13 00:25:04.681511 systemd-resolved[1408]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 13 00:25:04.758905 containerd[1594]: time="2025-09-13T00:25:04.758366723Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-d44446994-d9vnz,Uid:a939d331-c795-43ac-a495-5744af638db4,Namespace:calico-system,Attempt:0,} returns sandbox id \"e89d09d5669a9c63d58481a0f4005a6578382af54548a965d7d82098e595ad2a\"" Sep 13 00:25:04.941834 containerd[1594]: time="2025-09-13T00:25:04.941767694Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-54d579b49d-wq4t8,Uid:232536de-9922-44d0-aa94-90d23c7df90a,Namespace:calico-system,Attempt:0,} returns sandbox id \"bb2f082b253bed92d576ec8f7d0bd47095c31aa2579f153d1578e1ee32051b37\"" Sep 13 00:25:04.999011 systemd-networkd[1503]: cali9ca69a5c4fb: Link UP Sep 13 00:25:04.999233 systemd-networkd[1503]: cali9ca69a5c4fb: Gained carrier Sep 13 00:25:05.247070 systemd-networkd[1503]: cali1180c996d00: Gained IPv6LL Sep 13 00:25:05.310907 systemd-networkd[1503]: cali0e884b823a5: Gained IPv6LL Sep 13 00:25:05.502981 systemd-networkd[1503]: vxlan.calico: Gained IPv6LL Sep 13 00:25:05.692828 containerd[1594]: 2025-09-13 00:25:04.512 [INFO][4285] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--668d6bf9bc--5b5tl-eth0 coredns-668d6bf9bc- kube-system aba01603-30d6-40a1-a7ff-d2401c8a52b4 878 0 2025-09-13 00:24:17 +0000 UTC map[k8s-app:kube-dns pod-template-hash:668d6bf9bc projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-668d6bf9bc-5b5tl eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali9ca69a5c4fb [{dns UDP 53 0 } {dns-tcp TCP 53 0 } 
{metrics TCP 9153 0 }] [] }} ContainerID="6b65705b925d7549a2b0c51d31163800f157d670998db384a6e395cf6d674556" Namespace="kube-system" Pod="coredns-668d6bf9bc-5b5tl" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--5b5tl-" Sep 13 00:25:05.692828 containerd[1594]: 2025-09-13 00:25:04.512 [INFO][4285] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="6b65705b925d7549a2b0c51d31163800f157d670998db384a6e395cf6d674556" Namespace="kube-system" Pod="coredns-668d6bf9bc-5b5tl" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--5b5tl-eth0" Sep 13 00:25:05.692828 containerd[1594]: 2025-09-13 00:25:04.600 [INFO][4356] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="6b65705b925d7549a2b0c51d31163800f157d670998db384a6e395cf6d674556" HandleID="k8s-pod-network.6b65705b925d7549a2b0c51d31163800f157d670998db384a6e395cf6d674556" Workload="localhost-k8s-coredns--668d6bf9bc--5b5tl-eth0" Sep 13 00:25:05.693107 containerd[1594]: 2025-09-13 00:25:04.603 [INFO][4356] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="6b65705b925d7549a2b0c51d31163800f157d670998db384a6e395cf6d674556" HandleID="k8s-pod-network.6b65705b925d7549a2b0c51d31163800f157d670998db384a6e395cf6d674556" Workload="localhost-k8s-coredns--668d6bf9bc--5b5tl-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004fb90), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-668d6bf9bc-5b5tl", "timestamp":"2025-09-13 00:25:04.600844616 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 13 00:25:05.693107 containerd[1594]: 2025-09-13 00:25:04.604 [INFO][4356] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. 
Sep 13 00:25:05.693107 containerd[1594]: 2025-09-13 00:25:04.606 [INFO][4356] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Sep 13 00:25:05.693107 containerd[1594]: 2025-09-13 00:25:04.606 [INFO][4356] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost'
Sep 13 00:25:05.693107 containerd[1594]: 2025-09-13 00:25:04.620 [INFO][4356] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.6b65705b925d7549a2b0c51d31163800f157d670998db384a6e395cf6d674556" host="localhost"
Sep 13 00:25:05.693107 containerd[1594]: 2025-09-13 00:25:04.630 [INFO][4356] ipam/ipam.go 394: Looking up existing affinities for host host="localhost"
Sep 13 00:25:05.693107 containerd[1594]: 2025-09-13 00:25:04.637 [INFO][4356] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost"
Sep 13 00:25:05.693107 containerd[1594]: 2025-09-13 00:25:04.640 [INFO][4356] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost"
Sep 13 00:25:05.693107 containerd[1594]: 2025-09-13 00:25:04.647 [INFO][4356] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost"
Sep 13 00:25:05.693107 containerd[1594]: 2025-09-13 00:25:04.647 [INFO][4356] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.6b65705b925d7549a2b0c51d31163800f157d670998db384a6e395cf6d674556" host="localhost"
Sep 13 00:25:05.693407 containerd[1594]: 2025-09-13 00:25:04.653 [INFO][4356] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.6b65705b925d7549a2b0c51d31163800f157d670998db384a6e395cf6d674556
Sep 13 00:25:05.693407 containerd[1594]: 2025-09-13 00:25:04.729 [INFO][4356] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.6b65705b925d7549a2b0c51d31163800f157d670998db384a6e395cf6d674556" host="localhost"
Sep 13 00:25:05.693407 containerd[1594]: 2025-09-13 00:25:04.989 [INFO][4356]
ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.132/26] block=192.168.88.128/26 handle="k8s-pod-network.6b65705b925d7549a2b0c51d31163800f157d670998db384a6e395cf6d674556" host="localhost" Sep 13 00:25:05.693407 containerd[1594]: 2025-09-13 00:25:04.989 [INFO][4356] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.132/26] handle="k8s-pod-network.6b65705b925d7549a2b0c51d31163800f157d670998db384a6e395cf6d674556" host="localhost" Sep 13 00:25:05.693407 containerd[1594]: 2025-09-13 00:25:04.989 [INFO][4356] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:25:05.693407 containerd[1594]: 2025-09-13 00:25:04.989 [INFO][4356] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.132/26] IPv6=[] ContainerID="6b65705b925d7549a2b0c51d31163800f157d670998db384a6e395cf6d674556" HandleID="k8s-pod-network.6b65705b925d7549a2b0c51d31163800f157d670998db384a6e395cf6d674556" Workload="localhost-k8s-coredns--668d6bf9bc--5b5tl-eth0" Sep 13 00:25:05.693562 containerd[1594]: 2025-09-13 00:25:04.993 [INFO][4285] cni-plugin/k8s.go 418: Populated endpoint ContainerID="6b65705b925d7549a2b0c51d31163800f157d670998db384a6e395cf6d674556" Namespace="kube-system" Pod="coredns-668d6bf9bc-5b5tl" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--5b5tl-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--5b5tl-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"aba01603-30d6-40a1-a7ff-d2401c8a52b4", ResourceVersion:"878", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 24, 17, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", 
"projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-668d6bf9bc-5b5tl", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali9ca69a5c4fb", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:25:05.693664 containerd[1594]: 2025-09-13 00:25:04.993 [INFO][4285] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.132/32] ContainerID="6b65705b925d7549a2b0c51d31163800f157d670998db384a6e395cf6d674556" Namespace="kube-system" Pod="coredns-668d6bf9bc-5b5tl" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--5b5tl-eth0" Sep 13 00:25:05.693664 containerd[1594]: 2025-09-13 00:25:04.993 [INFO][4285] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali9ca69a5c4fb ContainerID="6b65705b925d7549a2b0c51d31163800f157d670998db384a6e395cf6d674556" Namespace="kube-system" Pod="coredns-668d6bf9bc-5b5tl" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--5b5tl-eth0" Sep 13 00:25:05.693664 containerd[1594]: 2025-09-13 00:25:04.999 [INFO][4285] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding 
ContainerID="6b65705b925d7549a2b0c51d31163800f157d670998db384a6e395cf6d674556" Namespace="kube-system" Pod="coredns-668d6bf9bc-5b5tl" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--5b5tl-eth0" Sep 13 00:25:05.693768 containerd[1594]: 2025-09-13 00:25:05.000 [INFO][4285] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="6b65705b925d7549a2b0c51d31163800f157d670998db384a6e395cf6d674556" Namespace="kube-system" Pod="coredns-668d6bf9bc-5b5tl" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--5b5tl-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--5b5tl-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"aba01603-30d6-40a1-a7ff-d2401c8a52b4", ResourceVersion:"878", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 24, 17, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"6b65705b925d7549a2b0c51d31163800f157d670998db384a6e395cf6d674556", Pod:"coredns-668d6bf9bc-5b5tl", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali9ca69a5c4fb", MAC:"7e:e4:bc:0d:63:26", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, 
StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:25:05.693768 containerd[1594]: 2025-09-13 00:25:05.689 [INFO][4285] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="6b65705b925d7549a2b0c51d31163800f157d670998db384a6e395cf6d674556" Namespace="kube-system" Pod="coredns-668d6bf9bc-5b5tl" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--5b5tl-eth0" Sep 13 00:25:05.788942 containerd[1594]: time="2025-09-13T00:25:05.787996984Z" level=info msg="connecting to shim 6b65705b925d7549a2b0c51d31163800f157d670998db384a6e395cf6d674556" address="unix:///run/containerd/s/3930066a4a6faf13a1531349320df311106ce1cf273006b0f6606e4cfe730b3f" namespace=k8s.io protocol=ttrpc version=3 Sep 13 00:25:05.810454 systemd-networkd[1503]: calib35d25d3139: Link UP Sep 13 00:25:05.812171 systemd-networkd[1503]: calib35d25d3139: Gained carrier Sep 13 00:25:05.837407 systemd[1]: Started cri-containerd-6b65705b925d7549a2b0c51d31163800f157d670998db384a6e395cf6d674556.scope - libcontainer container 6b65705b925d7549a2b0c51d31163800f157d670998db384a6e395cf6d674556. 
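In the endpoint dumps above, the Go struct printer shows port numbers as hex literals (Port:0x35, Port:0x23c1). A quick decode — plain Python, nothing Calico-specific assumed — confirms these are the usual CoreDNS ports:

```python
# Decode the hex Port fields from the WorkloadEndpointPort dump above.
ports = {"dns": 0x35, "dns-tcp": 0x35, "metrics": 0x23c1}
print({name: int(value) for name, value in ports.items()})
# {'dns': 53, 'dns-tcp': 53, 'metrics': 9153}
```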
Sep 13 00:25:05.839205 containerd[1594]: 2025-09-13 00:25:04.603 [INFO][4349] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--6c686fdbf4--mjkpg-eth0 calico-apiserver-6c686fdbf4- calico-apiserver 2be952e7-8835-4c16-a30d-a8e92050cd73 880 0 2025-09-13 00:24:29 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:6c686fdbf4 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-6c686fdbf4-mjkpg eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calib35d25d3139 [] [] }} ContainerID="ad9c8b68a24914c303e9b0ffeb81e88c07ddb4efb1d7710a19e81d83b1d5901c" Namespace="calico-apiserver" Pod="calico-apiserver-6c686fdbf4-mjkpg" WorkloadEndpoint="localhost-k8s-calico--apiserver--6c686fdbf4--mjkpg-" Sep 13 00:25:05.839205 containerd[1594]: 2025-09-13 00:25:04.603 [INFO][4349] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="ad9c8b68a24914c303e9b0ffeb81e88c07ddb4efb1d7710a19e81d83b1d5901c" Namespace="calico-apiserver" Pod="calico-apiserver-6c686fdbf4-mjkpg" WorkloadEndpoint="localhost-k8s-calico--apiserver--6c686fdbf4--mjkpg-eth0" Sep 13 00:25:05.839205 containerd[1594]: 2025-09-13 00:25:04.665 [INFO][4445] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="ad9c8b68a24914c303e9b0ffeb81e88c07ddb4efb1d7710a19e81d83b1d5901c" HandleID="k8s-pod-network.ad9c8b68a24914c303e9b0ffeb81e88c07ddb4efb1d7710a19e81d83b1d5901c" Workload="localhost-k8s-calico--apiserver--6c686fdbf4--mjkpg-eth0" Sep 13 00:25:05.839205 containerd[1594]: 2025-09-13 00:25:04.665 [INFO][4445] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="ad9c8b68a24914c303e9b0ffeb81e88c07ddb4efb1d7710a19e81d83b1d5901c" 
HandleID="k8s-pod-network.ad9c8b68a24914c303e9b0ffeb81e88c07ddb4efb1d7710a19e81d83b1d5901c" Workload="localhost-k8s-calico--apiserver--6c686fdbf4--mjkpg-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002cd510), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-6c686fdbf4-mjkpg", "timestamp":"2025-09-13 00:25:04.665064636 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 13 00:25:05.839205 containerd[1594]: 2025-09-13 00:25:04.665 [INFO][4445] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:25:05.839205 containerd[1594]: 2025-09-13 00:25:04.989 [INFO][4445] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:25:05.839205 containerd[1594]: 2025-09-13 00:25:04.989 [INFO][4445] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 13 00:25:05.839205 containerd[1594]: 2025-09-13 00:25:05.754 [INFO][4445] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.ad9c8b68a24914c303e9b0ffeb81e88c07ddb4efb1d7710a19e81d83b1d5901c" host="localhost" Sep 13 00:25:05.839205 containerd[1594]: 2025-09-13 00:25:05.762 [INFO][4445] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 13 00:25:05.839205 containerd[1594]: 2025-09-13 00:25:05.769 [INFO][4445] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 13 00:25:05.839205 containerd[1594]: 2025-09-13 00:25:05.772 [INFO][4445] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 13 00:25:05.839205 containerd[1594]: 2025-09-13 00:25:05.774 [INFO][4445] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 13 00:25:05.839205 containerd[1594]: 
2025-09-13 00:25:05.774 [INFO][4445] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.ad9c8b68a24914c303e9b0ffeb81e88c07ddb4efb1d7710a19e81d83b1d5901c" host="localhost" Sep 13 00:25:05.839205 containerd[1594]: 2025-09-13 00:25:05.777 [INFO][4445] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.ad9c8b68a24914c303e9b0ffeb81e88c07ddb4efb1d7710a19e81d83b1d5901c Sep 13 00:25:05.839205 containerd[1594]: 2025-09-13 00:25:05.784 [INFO][4445] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.ad9c8b68a24914c303e9b0ffeb81e88c07ddb4efb1d7710a19e81d83b1d5901c" host="localhost" Sep 13 00:25:05.839205 containerd[1594]: 2025-09-13 00:25:05.796 [INFO][4445] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.133/26] block=192.168.88.128/26 handle="k8s-pod-network.ad9c8b68a24914c303e9b0ffeb81e88c07ddb4efb1d7710a19e81d83b1d5901c" host="localhost" Sep 13 00:25:05.839205 containerd[1594]: 2025-09-13 00:25:05.796 [INFO][4445] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.133/26] handle="k8s-pod-network.ad9c8b68a24914c303e9b0ffeb81e88c07ddb4efb1d7710a19e81d83b1d5901c" host="localhost" Sep 13 00:25:05.839205 containerd[1594]: 2025-09-13 00:25:05.796 [INFO][4445] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Sep 13 00:25:05.839205 containerd[1594]: 2025-09-13 00:25:05.796 [INFO][4445] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.133/26] IPv6=[] ContainerID="ad9c8b68a24914c303e9b0ffeb81e88c07ddb4efb1d7710a19e81d83b1d5901c" HandleID="k8s-pod-network.ad9c8b68a24914c303e9b0ffeb81e88c07ddb4efb1d7710a19e81d83b1d5901c" Workload="localhost-k8s-calico--apiserver--6c686fdbf4--mjkpg-eth0" Sep 13 00:25:05.839746 containerd[1594]: 2025-09-13 00:25:05.801 [INFO][4349] cni-plugin/k8s.go 418: Populated endpoint ContainerID="ad9c8b68a24914c303e9b0ffeb81e88c07ddb4efb1d7710a19e81d83b1d5901c" Namespace="calico-apiserver" Pod="calico-apiserver-6c686fdbf4-mjkpg" WorkloadEndpoint="localhost-k8s-calico--apiserver--6c686fdbf4--mjkpg-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--6c686fdbf4--mjkpg-eth0", GenerateName:"calico-apiserver-6c686fdbf4-", Namespace:"calico-apiserver", SelfLink:"", UID:"2be952e7-8835-4c16-a30d-a8e92050cd73", ResourceVersion:"880", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 24, 29, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6c686fdbf4", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-6c686fdbf4-mjkpg", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", 
IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calib35d25d3139", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:25:05.839746 containerd[1594]: 2025-09-13 00:25:05.802 [INFO][4349] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.133/32] ContainerID="ad9c8b68a24914c303e9b0ffeb81e88c07ddb4efb1d7710a19e81d83b1d5901c" Namespace="calico-apiserver" Pod="calico-apiserver-6c686fdbf4-mjkpg" WorkloadEndpoint="localhost-k8s-calico--apiserver--6c686fdbf4--mjkpg-eth0" Sep 13 00:25:05.839746 containerd[1594]: 2025-09-13 00:25:05.802 [INFO][4349] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calib35d25d3139 ContainerID="ad9c8b68a24914c303e9b0ffeb81e88c07ddb4efb1d7710a19e81d83b1d5901c" Namespace="calico-apiserver" Pod="calico-apiserver-6c686fdbf4-mjkpg" WorkloadEndpoint="localhost-k8s-calico--apiserver--6c686fdbf4--mjkpg-eth0" Sep 13 00:25:05.839746 containerd[1594]: 2025-09-13 00:25:05.812 [INFO][4349] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="ad9c8b68a24914c303e9b0ffeb81e88c07ddb4efb1d7710a19e81d83b1d5901c" Namespace="calico-apiserver" Pod="calico-apiserver-6c686fdbf4-mjkpg" WorkloadEndpoint="localhost-k8s-calico--apiserver--6c686fdbf4--mjkpg-eth0" Sep 13 00:25:05.839746 containerd[1594]: 2025-09-13 00:25:05.813 [INFO][4349] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="ad9c8b68a24914c303e9b0ffeb81e88c07ddb4efb1d7710a19e81d83b1d5901c" Namespace="calico-apiserver" Pod="calico-apiserver-6c686fdbf4-mjkpg" WorkloadEndpoint="localhost-k8s-calico--apiserver--6c686fdbf4--mjkpg-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--6c686fdbf4--mjkpg-eth0", 
GenerateName:"calico-apiserver-6c686fdbf4-", Namespace:"calico-apiserver", SelfLink:"", UID:"2be952e7-8835-4c16-a30d-a8e92050cd73", ResourceVersion:"880", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 24, 29, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6c686fdbf4", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"ad9c8b68a24914c303e9b0ffeb81e88c07ddb4efb1d7710a19e81d83b1d5901c", Pod:"calico-apiserver-6c686fdbf4-mjkpg", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calib35d25d3139", MAC:"3e:95:81:d0:0a:ed", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:25:05.839746 containerd[1594]: 2025-09-13 00:25:05.828 [INFO][4349] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="ad9c8b68a24914c303e9b0ffeb81e88c07ddb4efb1d7710a19e81d83b1d5901c" Namespace="calico-apiserver" Pod="calico-apiserver-6c686fdbf4-mjkpg" WorkloadEndpoint="localhost-k8s-calico--apiserver--6c686fdbf4--mjkpg-eth0" Sep 13 00:25:05.861664 systemd-resolved[1408]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 13 00:25:05.943335 containerd[1594]: time="2025-09-13T00:25:05.943265432Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:coredns-668d6bf9bc-5b5tl,Uid:aba01603-30d6-40a1-a7ff-d2401c8a52b4,Namespace:kube-system,Attempt:0,} returns sandbox id \"6b65705b925d7549a2b0c51d31163800f157d670998db384a6e395cf6d674556\"" Sep 13 00:25:05.948316 kubelet[2733]: E0913 00:25:05.947860 2733 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:25:05.949684 systemd-networkd[1503]: calic872faff708: Link UP Sep 13 00:25:05.950769 systemd-networkd[1503]: calic872faff708: Gained carrier Sep 13 00:25:05.958897 containerd[1594]: time="2025-09-13T00:25:05.958048914Z" level=info msg="CreateContainer within sandbox \"6b65705b925d7549a2b0c51d31163800f157d670998db384a6e395cf6d674556\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Sep 13 00:25:05.998214 containerd[1594]: 2025-09-13 00:25:04.587 [INFO][4311] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--6c686fdbf4--6mwrv-eth0 calico-apiserver-6c686fdbf4- calico-apiserver 89c5ca25-5c32-4aa7-8832-b12816552076 876 0 2025-09-13 00:24:29 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:6c686fdbf4 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-6c686fdbf4-6mwrv eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calic872faff708 [] [] }} ContainerID="9e7880b36bcf1eb6dc7500e22b9f206ad748c77977b3b958da7dedfe3ce8f381" Namespace="calico-apiserver" Pod="calico-apiserver-6c686fdbf4-6mwrv" WorkloadEndpoint="localhost-k8s-calico--apiserver--6c686fdbf4--6mwrv-" Sep 13 00:25:05.998214 containerd[1594]: 2025-09-13 00:25:04.587 [INFO][4311] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s 
ContainerID="9e7880b36bcf1eb6dc7500e22b9f206ad748c77977b3b958da7dedfe3ce8f381" Namespace="calico-apiserver" Pod="calico-apiserver-6c686fdbf4-6mwrv" WorkloadEndpoint="localhost-k8s-calico--apiserver--6c686fdbf4--6mwrv-eth0" Sep 13 00:25:05.998214 containerd[1594]: 2025-09-13 00:25:04.665 [INFO][4392] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="9e7880b36bcf1eb6dc7500e22b9f206ad748c77977b3b958da7dedfe3ce8f381" HandleID="k8s-pod-network.9e7880b36bcf1eb6dc7500e22b9f206ad748c77977b3b958da7dedfe3ce8f381" Workload="localhost-k8s-calico--apiserver--6c686fdbf4--6mwrv-eth0" Sep 13 00:25:05.998214 containerd[1594]: 2025-09-13 00:25:04.665 [INFO][4392] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="9e7880b36bcf1eb6dc7500e22b9f206ad748c77977b3b958da7dedfe3ce8f381" HandleID="k8s-pod-network.9e7880b36bcf1eb6dc7500e22b9f206ad748c77977b3b958da7dedfe3ce8f381" Workload="localhost-k8s-calico--apiserver--6c686fdbf4--6mwrv-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0001b1bf0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-6c686fdbf4-6mwrv", "timestamp":"2025-09-13 00:25:04.665131963 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 13 00:25:05.998214 containerd[1594]: 2025-09-13 00:25:04.665 [INFO][4392] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:25:05.998214 containerd[1594]: 2025-09-13 00:25:05.796 [INFO][4392] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 13 00:25:05.998214 containerd[1594]: 2025-09-13 00:25:05.796 [INFO][4392] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 13 00:25:05.998214 containerd[1594]: 2025-09-13 00:25:05.855 [INFO][4392] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.9e7880b36bcf1eb6dc7500e22b9f206ad748c77977b3b958da7dedfe3ce8f381" host="localhost" Sep 13 00:25:05.998214 containerd[1594]: 2025-09-13 00:25:05.864 [INFO][4392] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 13 00:25:05.998214 containerd[1594]: 2025-09-13 00:25:05.871 [INFO][4392] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 13 00:25:05.998214 containerd[1594]: 2025-09-13 00:25:05.876 [INFO][4392] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 13 00:25:05.998214 containerd[1594]: 2025-09-13 00:25:05.880 [INFO][4392] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 13 00:25:05.998214 containerd[1594]: 2025-09-13 00:25:05.880 [INFO][4392] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.9e7880b36bcf1eb6dc7500e22b9f206ad748c77977b3b958da7dedfe3ce8f381" host="localhost" Sep 13 00:25:05.998214 containerd[1594]: 2025-09-13 00:25:05.882 [INFO][4392] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.9e7880b36bcf1eb6dc7500e22b9f206ad748c77977b3b958da7dedfe3ce8f381 Sep 13 00:25:05.998214 containerd[1594]: 2025-09-13 00:25:05.890 [INFO][4392] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.9e7880b36bcf1eb6dc7500e22b9f206ad748c77977b3b958da7dedfe3ce8f381" host="localhost" Sep 13 00:25:05.998214 containerd[1594]: 2025-09-13 00:25:05.910 [INFO][4392] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.134/26] block=192.168.88.128/26 
handle="k8s-pod-network.9e7880b36bcf1eb6dc7500e22b9f206ad748c77977b3b958da7dedfe3ce8f381" host="localhost" Sep 13 00:25:05.998214 containerd[1594]: 2025-09-13 00:25:05.911 [INFO][4392] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.134/26] handle="k8s-pod-network.9e7880b36bcf1eb6dc7500e22b9f206ad748c77977b3b958da7dedfe3ce8f381" host="localhost" Sep 13 00:25:05.998214 containerd[1594]: 2025-09-13 00:25:05.911 [INFO][4392] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:25:05.998214 containerd[1594]: 2025-09-13 00:25:05.912 [INFO][4392] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.134/26] IPv6=[] ContainerID="9e7880b36bcf1eb6dc7500e22b9f206ad748c77977b3b958da7dedfe3ce8f381" HandleID="k8s-pod-network.9e7880b36bcf1eb6dc7500e22b9f206ad748c77977b3b958da7dedfe3ce8f381" Workload="localhost-k8s-calico--apiserver--6c686fdbf4--6mwrv-eth0" Sep 13 00:25:06.000091 containerd[1594]: 2025-09-13 00:25:05.932 [INFO][4311] cni-plugin/k8s.go 418: Populated endpoint ContainerID="9e7880b36bcf1eb6dc7500e22b9f206ad748c77977b3b958da7dedfe3ce8f381" Namespace="calico-apiserver" Pod="calico-apiserver-6c686fdbf4-6mwrv" WorkloadEndpoint="localhost-k8s-calico--apiserver--6c686fdbf4--6mwrv-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--6c686fdbf4--6mwrv-eth0", GenerateName:"calico-apiserver-6c686fdbf4-", Namespace:"calico-apiserver", SelfLink:"", UID:"89c5ca25-5c32-4aa7-8832-b12816552076", ResourceVersion:"876", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 24, 29, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6c686fdbf4", "projectcalico.org/namespace":"calico-apiserver", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-6c686fdbf4-6mwrv", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calic872faff708", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:25:06.000091 containerd[1594]: 2025-09-13 00:25:05.934 [INFO][4311] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.134/32] ContainerID="9e7880b36bcf1eb6dc7500e22b9f206ad748c77977b3b958da7dedfe3ce8f381" Namespace="calico-apiserver" Pod="calico-apiserver-6c686fdbf4-6mwrv" WorkloadEndpoint="localhost-k8s-calico--apiserver--6c686fdbf4--6mwrv-eth0" Sep 13 00:25:06.000091 containerd[1594]: 2025-09-13 00:25:05.934 [INFO][4311] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calic872faff708 ContainerID="9e7880b36bcf1eb6dc7500e22b9f206ad748c77977b3b958da7dedfe3ce8f381" Namespace="calico-apiserver" Pod="calico-apiserver-6c686fdbf4-6mwrv" WorkloadEndpoint="localhost-k8s-calico--apiserver--6c686fdbf4--6mwrv-eth0" Sep 13 00:25:06.000091 containerd[1594]: 2025-09-13 00:25:05.951 [INFO][4311] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="9e7880b36bcf1eb6dc7500e22b9f206ad748c77977b3b958da7dedfe3ce8f381" Namespace="calico-apiserver" Pod="calico-apiserver-6c686fdbf4-6mwrv" WorkloadEndpoint="localhost-k8s-calico--apiserver--6c686fdbf4--6mwrv-eth0" Sep 13 00:25:06.000091 containerd[1594]: 2025-09-13 00:25:05.952 [INFO][4311] cni-plugin/k8s.go 446: Added 
Mac, interface name, and active container ID to endpoint ContainerID="9e7880b36bcf1eb6dc7500e22b9f206ad748c77977b3b958da7dedfe3ce8f381" Namespace="calico-apiserver" Pod="calico-apiserver-6c686fdbf4-6mwrv" WorkloadEndpoint="localhost-k8s-calico--apiserver--6c686fdbf4--6mwrv-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--6c686fdbf4--6mwrv-eth0", GenerateName:"calico-apiserver-6c686fdbf4-", Namespace:"calico-apiserver", SelfLink:"", UID:"89c5ca25-5c32-4aa7-8832-b12816552076", ResourceVersion:"876", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 24, 29, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6c686fdbf4", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"9e7880b36bcf1eb6dc7500e22b9f206ad748c77977b3b958da7dedfe3ce8f381", Pod:"calico-apiserver-6c686fdbf4-6mwrv", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calic872faff708", MAC:"fe:0e:36:35:1a:08", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:25:06.000091 containerd[1594]: 2025-09-13 00:25:05.982 [INFO][4311] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore 
ContainerID="9e7880b36bcf1eb6dc7500e22b9f206ad748c77977b3b958da7dedfe3ce8f381" Namespace="calico-apiserver" Pod="calico-apiserver-6c686fdbf4-6mwrv" WorkloadEndpoint="localhost-k8s-calico--apiserver--6c686fdbf4--6mwrv-eth0" Sep 13 00:25:06.012775 containerd[1594]: time="2025-09-13T00:25:06.012681127Z" level=info msg="Container 5b214c0e67660fa1a063eb7c61c53cf76ee3facea45299e0b4723baa7a5fe4f2: CDI devices from CRI Config.CDIDevices: []" Sep 13 00:25:06.013431 containerd[1594]: time="2025-09-13T00:25:06.013388485Z" level=info msg="connecting to shim ad9c8b68a24914c303e9b0ffeb81e88c07ddb4efb1d7710a19e81d83b1d5901c" address="unix:///run/containerd/s/48913411079c8dbfe3fa1fd3df4fb566f4d2df156d976e2d83ce614a5fc5a48f" namespace=k8s.io protocol=ttrpc version=3 Sep 13 00:25:06.025808 containerd[1594]: time="2025-09-13T00:25:06.025722006Z" level=info msg="CreateContainer within sandbox \"6b65705b925d7549a2b0c51d31163800f157d670998db384a6e395cf6d674556\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"5b214c0e67660fa1a063eb7c61c53cf76ee3facea45299e0b4723baa7a5fe4f2\"" Sep 13 00:25:06.033314 containerd[1594]: time="2025-09-13T00:25:06.033249505Z" level=info msg="StartContainer for \"5b214c0e67660fa1a063eb7c61c53cf76ee3facea45299e0b4723baa7a5fe4f2\"" Sep 13 00:25:06.036279 systemd[1]: Started sshd@7-10.0.0.78:22-10.0.0.1:47190.service - OpenSSH per-connection server daemon (10.0.0.1:47190). Sep 13 00:25:06.043215 containerd[1594]: time="2025-09-13T00:25:06.042952068Z" level=info msg="connecting to shim 5b214c0e67660fa1a063eb7c61c53cf76ee3facea45299e0b4723baa7a5fe4f2" address="unix:///run/containerd/s/3930066a4a6faf13a1531349320df311106ce1cf273006b0f6606e4cfe730b3f" protocol=ttrpc version=3 Sep 13 00:25:06.105138 systemd[1]: Started cri-containerd-5b214c0e67660fa1a063eb7c61c53cf76ee3facea45299e0b4723baa7a5fe4f2.scope - libcontainer container 5b214c0e67660fa1a063eb7c61c53cf76ee3facea45299e0b4723baa7a5fe4f2. 
Sep 13 00:25:06.127145 containerd[1594]: time="2025-09-13T00:25:06.127049254Z" level=info msg="connecting to shim 9e7880b36bcf1eb6dc7500e22b9f206ad748c77977b3b958da7dedfe3ce8f381" address="unix:///run/containerd/s/e0d67e317334a228c884233ad39e3a8c0877ef9a3c932dc2254f22f5896cfdf6" namespace=k8s.io protocol=ttrpc version=3 Sep 13 00:25:06.129228 systemd-networkd[1503]: cali598b457abca: Link UP Sep 13 00:25:06.132284 sshd[4584]: Accepted publickey for core from 10.0.0.1 port 47190 ssh2: RSA SHA256:/8GM80NBEXcYojGDDtGcQvVXMgw0gDChkDh/EWkms34 Sep 13 00:25:06.142430 sshd-session[4584]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 00:25:06.142609 systemd-networkd[1503]: cali598b457abca: Gained carrier Sep 13 00:25:06.147065 systemd[1]: Started cri-containerd-ad9c8b68a24914c303e9b0ffeb81e88c07ddb4efb1d7710a19e81d83b1d5901c.scope - libcontainer container ad9c8b68a24914c303e9b0ffeb81e88c07ddb4efb1d7710a19e81d83b1d5901c. Sep 13 00:25:06.158884 systemd-logind[1582]: New session 8 of user core. Sep 13 00:25:06.161144 systemd[1]: Started session-8.scope - Session 8 of User core. 
Sep 13 00:25:06.170293 containerd[1594]: 2025-09-13 00:25:04.601 [INFO][4309] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-csi--node--driver--zh9k4-eth0 csi-node-driver- calico-system 011904af-1a84-4b2a-90be-5056b5973c41 728 0 2025-09-13 00:24:31 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:6c96d95cc7 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s localhost csi-node-driver-zh9k4 eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali598b457abca [] [] }} ContainerID="33151c8930944d76842eaf1bc53bace63743974209727ab482cc53f324e1c9a2" Namespace="calico-system" Pod="csi-node-driver-zh9k4" WorkloadEndpoint="localhost-k8s-csi--node--driver--zh9k4-" Sep 13 00:25:06.170293 containerd[1594]: 2025-09-13 00:25:04.602 [INFO][4309] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="33151c8930944d76842eaf1bc53bace63743974209727ab482cc53f324e1c9a2" Namespace="calico-system" Pod="csi-node-driver-zh9k4" WorkloadEndpoint="localhost-k8s-csi--node--driver--zh9k4-eth0" Sep 13 00:25:06.170293 containerd[1594]: 2025-09-13 00:25:04.671 [INFO][4441] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="33151c8930944d76842eaf1bc53bace63743974209727ab482cc53f324e1c9a2" HandleID="k8s-pod-network.33151c8930944d76842eaf1bc53bace63743974209727ab482cc53f324e1c9a2" Workload="localhost-k8s-csi--node--driver--zh9k4-eth0" Sep 13 00:25:06.170293 containerd[1594]: 2025-09-13 00:25:04.671 [INFO][4441] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="33151c8930944d76842eaf1bc53bace63743974209727ab482cc53f324e1c9a2" HandleID="k8s-pod-network.33151c8930944d76842eaf1bc53bace63743974209727ab482cc53f324e1c9a2" 
Workload="localhost-k8s-csi--node--driver--zh9k4-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000123a40), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"csi-node-driver-zh9k4", "timestamp":"2025-09-13 00:25:04.671571231 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 13 00:25:06.170293 containerd[1594]: 2025-09-13 00:25:04.672 [INFO][4441] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:25:06.170293 containerd[1594]: 2025-09-13 00:25:05.912 [INFO][4441] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:25:06.170293 containerd[1594]: 2025-09-13 00:25:05.912 [INFO][4441] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 13 00:25:06.170293 containerd[1594]: 2025-09-13 00:25:05.987 [INFO][4441] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.33151c8930944d76842eaf1bc53bace63743974209727ab482cc53f324e1c9a2" host="localhost" Sep 13 00:25:06.170293 containerd[1594]: 2025-09-13 00:25:06.010 [INFO][4441] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 13 00:25:06.170293 containerd[1594]: 2025-09-13 00:25:06.031 [INFO][4441] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 13 00:25:06.170293 containerd[1594]: 2025-09-13 00:25:06.040 [INFO][4441] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 13 00:25:06.170293 containerd[1594]: 2025-09-13 00:25:06.047 [INFO][4441] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 13 00:25:06.170293 containerd[1594]: 2025-09-13 00:25:06.047 [INFO][4441] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 
handle="k8s-pod-network.33151c8930944d76842eaf1bc53bace63743974209727ab482cc53f324e1c9a2" host="localhost" Sep 13 00:25:06.170293 containerd[1594]: 2025-09-13 00:25:06.052 [INFO][4441] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.33151c8930944d76842eaf1bc53bace63743974209727ab482cc53f324e1c9a2 Sep 13 00:25:06.170293 containerd[1594]: 2025-09-13 00:25:06.062 [INFO][4441] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.33151c8930944d76842eaf1bc53bace63743974209727ab482cc53f324e1c9a2" host="localhost" Sep 13 00:25:06.170293 containerd[1594]: 2025-09-13 00:25:06.076 [INFO][4441] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.135/26] block=192.168.88.128/26 handle="k8s-pod-network.33151c8930944d76842eaf1bc53bace63743974209727ab482cc53f324e1c9a2" host="localhost" Sep 13 00:25:06.170293 containerd[1594]: 2025-09-13 00:25:06.077 [INFO][4441] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.135/26] handle="k8s-pod-network.33151c8930944d76842eaf1bc53bace63743974209727ab482cc53f324e1c9a2" host="localhost" Sep 13 00:25:06.170293 containerd[1594]: 2025-09-13 00:25:06.077 [INFO][4441] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Sep 13 00:25:06.170293 containerd[1594]: 2025-09-13 00:25:06.077 [INFO][4441] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.135/26] IPv6=[] ContainerID="33151c8930944d76842eaf1bc53bace63743974209727ab482cc53f324e1c9a2" HandleID="k8s-pod-network.33151c8930944d76842eaf1bc53bace63743974209727ab482cc53f324e1c9a2" Workload="localhost-k8s-csi--node--driver--zh9k4-eth0" Sep 13 00:25:06.171025 containerd[1594]: 2025-09-13 00:25:06.113 [INFO][4309] cni-plugin/k8s.go 418: Populated endpoint ContainerID="33151c8930944d76842eaf1bc53bace63743974209727ab482cc53f324e1c9a2" Namespace="calico-system" Pod="csi-node-driver-zh9k4" WorkloadEndpoint="localhost-k8s-csi--node--driver--zh9k4-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--zh9k4-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"011904af-1a84-4b2a-90be-5056b5973c41", ResourceVersion:"728", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 24, 31, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"6c96d95cc7", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"csi-node-driver-zh9k4", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", 
Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali598b457abca", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:25:06.171025 containerd[1594]: 2025-09-13 00:25:06.113 [INFO][4309] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.135/32] ContainerID="33151c8930944d76842eaf1bc53bace63743974209727ab482cc53f324e1c9a2" Namespace="calico-system" Pod="csi-node-driver-zh9k4" WorkloadEndpoint="localhost-k8s-csi--node--driver--zh9k4-eth0" Sep 13 00:25:06.171025 containerd[1594]: 2025-09-13 00:25:06.113 [INFO][4309] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali598b457abca ContainerID="33151c8930944d76842eaf1bc53bace63743974209727ab482cc53f324e1c9a2" Namespace="calico-system" Pod="csi-node-driver-zh9k4" WorkloadEndpoint="localhost-k8s-csi--node--driver--zh9k4-eth0" Sep 13 00:25:06.171025 containerd[1594]: 2025-09-13 00:25:06.142 [INFO][4309] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="33151c8930944d76842eaf1bc53bace63743974209727ab482cc53f324e1c9a2" Namespace="calico-system" Pod="csi-node-driver-zh9k4" WorkloadEndpoint="localhost-k8s-csi--node--driver--zh9k4-eth0" Sep 13 00:25:06.171025 containerd[1594]: 2025-09-13 00:25:06.142 [INFO][4309] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="33151c8930944d76842eaf1bc53bace63743974209727ab482cc53f324e1c9a2" Namespace="calico-system" Pod="csi-node-driver-zh9k4" WorkloadEndpoint="localhost-k8s-csi--node--driver--zh9k4-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--zh9k4-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"011904af-1a84-4b2a-90be-5056b5973c41", ResourceVersion:"728", Generation:0, 
CreationTimestamp:time.Date(2025, time.September, 13, 0, 24, 31, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"6c96d95cc7", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"33151c8930944d76842eaf1bc53bace63743974209727ab482cc53f324e1c9a2", Pod:"csi-node-driver-zh9k4", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali598b457abca", MAC:"a6:bc:5b:82:5c:87", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:25:06.171025 containerd[1594]: 2025-09-13 00:25:06.158 [INFO][4309] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="33151c8930944d76842eaf1bc53bace63743974209727ab482cc53f324e1c9a2" Namespace="calico-system" Pod="csi-node-driver-zh9k4" WorkloadEndpoint="localhost-k8s-csi--node--driver--zh9k4-eth0" Sep 13 00:25:06.181722 systemd-resolved[1408]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 13 00:25:06.187504 systemd[1]: Started cri-containerd-9e7880b36bcf1eb6dc7500e22b9f206ad748c77977b3b958da7dedfe3ce8f381.scope - libcontainer container 9e7880b36bcf1eb6dc7500e22b9f206ad748c77977b3b958da7dedfe3ce8f381. 
Sep 13 00:25:06.210670 systemd-resolved[1408]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 13 00:25:06.264536 containerd[1594]: time="2025-09-13T00:25:06.263875979Z" level=info msg="StartContainer for \"5b214c0e67660fa1a063eb7c61c53cf76ee3facea45299e0b4723baa7a5fe4f2\" returns successfully" Sep 13 00:25:06.271980 systemd-networkd[1503]: cali9ca69a5c4fb: Gained IPv6LL Sep 13 00:25:06.324448 containerd[1594]: time="2025-09-13T00:25:06.324315087Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6c686fdbf4-mjkpg,Uid:2be952e7-8835-4c16-a30d-a8e92050cd73,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"ad9c8b68a24914c303e9b0ffeb81e88c07ddb4efb1d7710a19e81d83b1d5901c\"" Sep 13 00:25:06.451572 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2287397033.mount: Deactivated successfully. Sep 13 00:25:06.539451 sshd[4668]: Connection closed by 10.0.0.1 port 47190 Sep 13 00:25:06.539809 sshd-session[4584]: pam_unix(sshd:session): session closed for user core Sep 13 00:25:06.545112 systemd[1]: sshd@7-10.0.0.78:22-10.0.0.1:47190.service: Deactivated successfully. Sep 13 00:25:06.548123 systemd[1]: session-8.scope: Deactivated successfully. Sep 13 00:25:06.549106 systemd-logind[1582]: Session 8 logged out. Waiting for processes to exit. Sep 13 00:25:06.550505 systemd-logind[1582]: Removed session 8. 
Sep 13 00:25:06.674166 containerd[1594]: time="2025-09-13T00:25:06.674107604Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6c686fdbf4-6mwrv,Uid:89c5ca25-5c32-4aa7-8832-b12816552076,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"9e7880b36bcf1eb6dc7500e22b9f206ad748c77977b3b958da7dedfe3ce8f381\"" Sep 13 00:25:06.769085 kubelet[2733]: E0913 00:25:06.768879 2733 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:25:06.867819 kubelet[2733]: I0913 00:25:06.867523 2733 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-5b5tl" podStartSLOduration=49.867452758 podStartE2EDuration="49.867452758s" podCreationTimestamp="2025-09-13 00:24:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-13 00:25:06.867399738 +0000 UTC m=+55.339882633" watchObservedRunningTime="2025-09-13 00:25:06.867452758 +0000 UTC m=+55.339935643" Sep 13 00:25:06.902925 containerd[1594]: time="2025-09-13T00:25:06.902521013Z" level=info msg="connecting to shim 33151c8930944d76842eaf1bc53bace63743974209727ab482cc53f324e1c9a2" address="unix:///run/containerd/s/a941db825722fba0c761d2ad021cb56e241d3b2d38fabcc83736f1d86ea8424d" namespace=k8s.io protocol=ttrpc version=3 Sep 13 00:25:06.942169 systemd[1]: Started cri-containerd-33151c8930944d76842eaf1bc53bace63743974209727ab482cc53f324e1c9a2.scope - libcontainer container 33151c8930944d76842eaf1bc53bace63743974209727ab482cc53f324e1c9a2. 
Sep 13 00:25:06.962042 systemd-resolved[1408]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 13 00:25:06.979604 containerd[1594]: time="2025-09-13T00:25:06.979549842Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-zh9k4,Uid:011904af-1a84-4b2a-90be-5056b5973c41,Namespace:calico-system,Attempt:0,} returns sandbox id \"33151c8930944d76842eaf1bc53bace63743974209727ab482cc53f324e1c9a2\"" Sep 13 00:25:07.039050 systemd-networkd[1503]: calic872faff708: Gained IPv6LL Sep 13 00:25:07.295145 systemd-networkd[1503]: calib35d25d3139: Gained IPv6LL Sep 13 00:25:07.771442 kubelet[2733]: E0913 00:25:07.771398 2733 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:25:07.871030 systemd-networkd[1503]: cali598b457abca: Gained IPv6LL Sep 13 00:25:08.637957 containerd[1594]: time="2025-09-13T00:25:08.637881514Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:25:08.638989 containerd[1594]: time="2025-09-13T00:25:08.638964977Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.3: active requests=0, bytes read=51277746" Sep 13 00:25:08.640259 containerd[1594]: time="2025-09-13T00:25:08.640209733Z" level=info msg="ImageCreate event name:\"sha256:df191a54fb79de3c693f8b1b864a1bd3bd14f63b3fff9d5fa4869c471ce3cd37\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:25:08.642338 containerd[1594]: time="2025-09-13T00:25:08.642304254Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:27c4187717f08f0a5727019d8beb7597665eb47e69eaa1d7d091a7e28913e577\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:25:08.642949 containerd[1594]: 
time="2025-09-13T00:25:08.642902878Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\" with image id \"sha256:df191a54fb79de3c693f8b1b864a1bd3bd14f63b3fff9d5fa4869c471ce3cd37\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:27c4187717f08f0a5727019d8beb7597665eb47e69eaa1d7d091a7e28913e577\", size \"52770417\" in 4.157883252s" Sep 13 00:25:08.642988 containerd[1594]: time="2025-09-13T00:25:08.642954595Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\" returns image reference \"sha256:df191a54fb79de3c693f8b1b864a1bd3bd14f63b3fff9d5fa4869c471ce3cd37\"" Sep 13 00:25:08.647709 containerd[1594]: time="2025-09-13T00:25:08.647217737Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.3\"" Sep 13 00:25:08.654610 containerd[1594]: time="2025-09-13T00:25:08.654549868Z" level=info msg="CreateContainer within sandbox \"5d244f4027a3ee2f262cd1095c37dff03bd7e0901d216eba66d868d37cfd5663\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Sep 13 00:25:08.663739 containerd[1594]: time="2025-09-13T00:25:08.663678662Z" level=info msg="Container ced5a06fdc3d04e048fcc88c6316396981ce8af64159be806439646d6faa744d: CDI devices from CRI Config.CDIDevices: []" Sep 13 00:25:08.674174 containerd[1594]: time="2025-09-13T00:25:08.674129115Z" level=info msg="CreateContainer within sandbox \"5d244f4027a3ee2f262cd1095c37dff03bd7e0901d216eba66d868d37cfd5663\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"ced5a06fdc3d04e048fcc88c6316396981ce8af64159be806439646d6faa744d\"" Sep 13 00:25:08.674764 containerd[1594]: time="2025-09-13T00:25:08.674691130Z" level=info msg="StartContainer for \"ced5a06fdc3d04e048fcc88c6316396981ce8af64159be806439646d6faa744d\"" Sep 13 00:25:08.675969 containerd[1594]: time="2025-09-13T00:25:08.675921009Z" level=info msg="connecting to shim 
ced5a06fdc3d04e048fcc88c6316396981ce8af64159be806439646d6faa744d" address="unix:///run/containerd/s/f6c11b85237116897ac280722d5bda214b490e08b6354341e68938cd0c4d5ec8" protocol=ttrpc version=3 Sep 13 00:25:08.710974 systemd[1]: Started cri-containerd-ced5a06fdc3d04e048fcc88c6316396981ce8af64159be806439646d6faa744d.scope - libcontainer container ced5a06fdc3d04e048fcc88c6316396981ce8af64159be806439646d6faa744d. Sep 13 00:25:08.766253 containerd[1594]: time="2025-09-13T00:25:08.766197757Z" level=info msg="StartContainer for \"ced5a06fdc3d04e048fcc88c6316396981ce8af64159be806439646d6faa744d\" returns successfully" Sep 13 00:25:08.777030 kubelet[2733]: E0913 00:25:08.776963 2733 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:25:08.789056 kubelet[2733]: I0913 00:25:08.788971 2733 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-6cdf4f8f49-b6h44" podStartSLOduration=33.626399798 podStartE2EDuration="37.788941623s" podCreationTimestamp="2025-09-13 00:24:31 +0000 UTC" firstStartedPulling="2025-09-13 00:25:04.484534375 +0000 UTC m=+52.957017260" lastFinishedPulling="2025-09-13 00:25:08.6470762 +0000 UTC m=+57.119559085" observedRunningTime="2025-09-13 00:25:08.788514932 +0000 UTC m=+57.260997817" watchObservedRunningTime="2025-09-13 00:25:08.788941623 +0000 UTC m=+57.261424508" Sep 13 00:25:08.883230 containerd[1594]: time="2025-09-13T00:25:08.883150290Z" level=info msg="TaskExit event in podsandbox handler container_id:\"ced5a06fdc3d04e048fcc88c6316396981ce8af64159be806439646d6faa744d\" id:\"7cd1ce71c989158a6e1d6eac91db34b0cd4df584b05107cd026b39e8275ad301\" pid:4832 exited_at:{seconds:1757723108 nanos:882680719}" Sep 13 00:25:10.916191 containerd[1594]: time="2025-09-13T00:25:10.916104383Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker:v3.30.3\" 
labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:25:10.917459 containerd[1594]: time="2025-09-13T00:25:10.917390167Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.3: active requests=0, bytes read=4661291" Sep 13 00:25:10.919464 containerd[1594]: time="2025-09-13T00:25:10.919387415Z" level=info msg="ImageCreate event name:\"sha256:9a4eedeed4a531acefb7f5d0a1b7e3856b1a9a24d9e7d25deef2134d7a734c2d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:25:10.922317 containerd[1594]: time="2025-09-13T00:25:10.922271346Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker@sha256:e7113761fc7633d515882f0d48b5c8d0b8e62f3f9d34823f2ee194bb16d2ec44\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:25:10.923060 containerd[1594]: time="2025-09-13T00:25:10.923017818Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker:v3.30.3\" with image id \"sha256:9a4eedeed4a531acefb7f5d0a1b7e3856b1a9a24d9e7d25deef2134d7a734c2d\", repo tag \"ghcr.io/flatcar/calico/whisker:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/whisker@sha256:e7113761fc7633d515882f0d48b5c8d0b8e62f3f9d34823f2ee194bb16d2ec44\", size \"6153986\" in 2.275772981s" Sep 13 00:25:10.923117 containerd[1594]: time="2025-09-13T00:25:10.923062713Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.3\" returns image reference \"sha256:9a4eedeed4a531acefb7f5d0a1b7e3856b1a9a24d9e7d25deef2134d7a734c2d\"" Sep 13 00:25:10.924317 containerd[1594]: time="2025-09-13T00:25:10.924288873Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.3\"" Sep 13 00:25:10.926466 containerd[1594]: time="2025-09-13T00:25:10.925762819Z" level=info msg="CreateContainer within sandbox \"e89d09d5669a9c63d58481a0f4005a6578382af54548a965d7d82098e595ad2a\" for container &ContainerMetadata{Name:whisker,Attempt:0,}" Sep 13 00:25:10.966423 containerd[1594]: time="2025-09-13T00:25:10.966353826Z" level=info msg="Container 
7d3ca616d28da268f67522eec6308e8867097b02dbd545cecbb00b9a5a4b1d70: CDI devices from CRI Config.CDIDevices: []" Sep 13 00:25:10.979073 containerd[1594]: time="2025-09-13T00:25:10.978492596Z" level=info msg="CreateContainer within sandbox \"e89d09d5669a9c63d58481a0f4005a6578382af54548a965d7d82098e595ad2a\" for &ContainerMetadata{Name:whisker,Attempt:0,} returns container id \"7d3ca616d28da268f67522eec6308e8867097b02dbd545cecbb00b9a5a4b1d70\"" Sep 13 00:25:10.979479 containerd[1594]: time="2025-09-13T00:25:10.979436547Z" level=info msg="StartContainer for \"7d3ca616d28da268f67522eec6308e8867097b02dbd545cecbb00b9a5a4b1d70\"" Sep 13 00:25:10.980705 containerd[1594]: time="2025-09-13T00:25:10.980679180Z" level=info msg="connecting to shim 7d3ca616d28da268f67522eec6308e8867097b02dbd545cecbb00b9a5a4b1d70" address="unix:///run/containerd/s/ab4b2175a94c8432d09b2a8fcd716e8bf295437e0576f55805a385d303d21cfe" protocol=ttrpc version=3 Sep 13 00:25:11.003937 systemd[1]: Started cri-containerd-7d3ca616d28da268f67522eec6308e8867097b02dbd545cecbb00b9a5a4b1d70.scope - libcontainer container 7d3ca616d28da268f67522eec6308e8867097b02dbd545cecbb00b9a5a4b1d70. Sep 13 00:25:11.061360 containerd[1594]: time="2025-09-13T00:25:11.061293302Z" level=info msg="StartContainer for \"7d3ca616d28da268f67522eec6308e8867097b02dbd545cecbb00b9a5a4b1d70\" returns successfully" Sep 13 00:25:11.552173 systemd[1]: Started sshd@8-10.0.0.78:22-10.0.0.1:49476.service - OpenSSH per-connection server daemon (10.0.0.1:49476). Sep 13 00:25:11.606816 sshd[4893]: Accepted publickey for core from 10.0.0.1 port 49476 ssh2: RSA SHA256:/8GM80NBEXcYojGDDtGcQvVXMgw0gDChkDh/EWkms34 Sep 13 00:25:11.609865 sshd-session[4893]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 00:25:11.615344 systemd-logind[1582]: New session 9 of user core. Sep 13 00:25:11.621960 systemd[1]: Started session-9.scope - Session 9 of User core. 
Sep 13 00:25:11.747996 sshd[4898]: Connection closed by 10.0.0.1 port 49476 Sep 13 00:25:11.748326 sshd-session[4893]: pam_unix(sshd:session): session closed for user core Sep 13 00:25:11.752436 systemd[1]: sshd@8-10.0.0.78:22-10.0.0.1:49476.service: Deactivated successfully. Sep 13 00:25:11.754368 systemd[1]: session-9.scope: Deactivated successfully. Sep 13 00:25:11.755171 systemd-logind[1582]: Session 9 logged out. Waiting for processes to exit. Sep 13 00:25:11.756377 systemd-logind[1582]: Removed session 9. Sep 13 00:25:13.609186 kubelet[2733]: E0913 00:25:13.609135 2733 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:25:13.610531 containerd[1594]: time="2025-09-13T00:25:13.610474339Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-qh5km,Uid:c24bc14a-01c5-4568-b35c-bcb58b6575e4,Namespace:kube-system,Attempt:0,}" Sep 13 00:25:13.999900 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1230016526.mount: Deactivated successfully. 
Sep 13 00:25:14.222257 systemd-networkd[1503]: cali9b3ec670881: Link UP Sep 13 00:25:14.223402 systemd-networkd[1503]: cali9b3ec670881: Gained carrier Sep 13 00:25:14.654323 containerd[1594]: 2025-09-13 00:25:13.893 [INFO][4916] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--668d6bf9bc--qh5km-eth0 coredns-668d6bf9bc- kube-system c24bc14a-01c5-4568-b35c-bcb58b6575e4 870 0 2025-09-13 00:24:18 +0000 UTC map[k8s-app:kube-dns pod-template-hash:668d6bf9bc projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-668d6bf9bc-qh5km eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali9b3ec670881 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="a1fd44d4c6a4f60a957fb0def464940616a66b96ba8f46cb9772e3a142c2c445" Namespace="kube-system" Pod="coredns-668d6bf9bc-qh5km" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--qh5km-" Sep 13 00:25:14.654323 containerd[1594]: 2025-09-13 00:25:13.893 [INFO][4916] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="a1fd44d4c6a4f60a957fb0def464940616a66b96ba8f46cb9772e3a142c2c445" Namespace="kube-system" Pod="coredns-668d6bf9bc-qh5km" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--qh5km-eth0" Sep 13 00:25:14.654323 containerd[1594]: 2025-09-13 00:25:13.937 [INFO][4931] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="a1fd44d4c6a4f60a957fb0def464940616a66b96ba8f46cb9772e3a142c2c445" HandleID="k8s-pod-network.a1fd44d4c6a4f60a957fb0def464940616a66b96ba8f46cb9772e3a142c2c445" Workload="localhost-k8s-coredns--668d6bf9bc--qh5km-eth0" Sep 13 00:25:14.654323 containerd[1594]: 2025-09-13 00:25:13.938 [INFO][4931] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="a1fd44d4c6a4f60a957fb0def464940616a66b96ba8f46cb9772e3a142c2c445" 
HandleID="k8s-pod-network.a1fd44d4c6a4f60a957fb0def464940616a66b96ba8f46cb9772e3a142c2c445" Workload="localhost-k8s-coredns--668d6bf9bc--qh5km-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004fb70), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-668d6bf9bc-qh5km", "timestamp":"2025-09-13 00:25:13.937848249 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 13 00:25:14.654323 containerd[1594]: 2025-09-13 00:25:13.939 [INFO][4931] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:25:14.654323 containerd[1594]: 2025-09-13 00:25:13.940 [INFO][4931] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:25:14.654323 containerd[1594]: 2025-09-13 00:25:13.941 [INFO][4931] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 13 00:25:14.654323 containerd[1594]: 2025-09-13 00:25:13.954 [INFO][4931] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.a1fd44d4c6a4f60a957fb0def464940616a66b96ba8f46cb9772e3a142c2c445" host="localhost" Sep 13 00:25:14.654323 containerd[1594]: 2025-09-13 00:25:13.959 [INFO][4931] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 13 00:25:14.654323 containerd[1594]: 2025-09-13 00:25:13.980 [INFO][4931] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 13 00:25:14.654323 containerd[1594]: 2025-09-13 00:25:13.984 [INFO][4931] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 13 00:25:14.654323 containerd[1594]: 2025-09-13 00:25:13.986 [INFO][4931] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 13 00:25:14.654323 containerd[1594]: 2025-09-13 00:25:13.986 
[INFO][4931] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.a1fd44d4c6a4f60a957fb0def464940616a66b96ba8f46cb9772e3a142c2c445" host="localhost" Sep 13 00:25:14.654323 containerd[1594]: 2025-09-13 00:25:13.988 [INFO][4931] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.a1fd44d4c6a4f60a957fb0def464940616a66b96ba8f46cb9772e3a142c2c445 Sep 13 00:25:14.654323 containerd[1594]: 2025-09-13 00:25:14.021 [INFO][4931] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.a1fd44d4c6a4f60a957fb0def464940616a66b96ba8f46cb9772e3a142c2c445" host="localhost" Sep 13 00:25:14.654323 containerd[1594]: 2025-09-13 00:25:14.216 [INFO][4931] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.136/26] block=192.168.88.128/26 handle="k8s-pod-network.a1fd44d4c6a4f60a957fb0def464940616a66b96ba8f46cb9772e3a142c2c445" host="localhost" Sep 13 00:25:14.654323 containerd[1594]: 2025-09-13 00:25:14.216 [INFO][4931] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.136/26] handle="k8s-pod-network.a1fd44d4c6a4f60a957fb0def464940616a66b96ba8f46cb9772e3a142c2c445" host="localhost" Sep 13 00:25:14.654323 containerd[1594]: 2025-09-13 00:25:14.216 [INFO][4931] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Sep 13 00:25:14.654323 containerd[1594]: 2025-09-13 00:25:14.216 [INFO][4931] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.136/26] IPv6=[] ContainerID="a1fd44d4c6a4f60a957fb0def464940616a66b96ba8f46cb9772e3a142c2c445" HandleID="k8s-pod-network.a1fd44d4c6a4f60a957fb0def464940616a66b96ba8f46cb9772e3a142c2c445" Workload="localhost-k8s-coredns--668d6bf9bc--qh5km-eth0" Sep 13 00:25:14.682572 containerd[1594]: 2025-09-13 00:25:14.220 [INFO][4916] cni-plugin/k8s.go 418: Populated endpoint ContainerID="a1fd44d4c6a4f60a957fb0def464940616a66b96ba8f46cb9772e3a142c2c445" Namespace="kube-system" Pod="coredns-668d6bf9bc-qh5km" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--qh5km-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--qh5km-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"c24bc14a-01c5-4568-b35c-bcb58b6575e4", ResourceVersion:"870", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 24, 18, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-668d6bf9bc-qh5km", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali9b3ec670881", MAC:"", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:25:14.682572 containerd[1594]: 2025-09-13 00:25:14.220 [INFO][4916] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.136/32] ContainerID="a1fd44d4c6a4f60a957fb0def464940616a66b96ba8f46cb9772e3a142c2c445" Namespace="kube-system" Pod="coredns-668d6bf9bc-qh5km" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--qh5km-eth0" Sep 13 00:25:14.682572 containerd[1594]: 2025-09-13 00:25:14.220 [INFO][4916] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali9b3ec670881 ContainerID="a1fd44d4c6a4f60a957fb0def464940616a66b96ba8f46cb9772e3a142c2c445" Namespace="kube-system" Pod="coredns-668d6bf9bc-qh5km" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--qh5km-eth0" Sep 13 00:25:14.682572 containerd[1594]: 2025-09-13 00:25:14.222 [INFO][4916] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="a1fd44d4c6a4f60a957fb0def464940616a66b96ba8f46cb9772e3a142c2c445" Namespace="kube-system" Pod="coredns-668d6bf9bc-qh5km" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--qh5km-eth0" Sep 13 00:25:14.682572 containerd[1594]: 2025-09-13 00:25:14.223 [INFO][4916] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="a1fd44d4c6a4f60a957fb0def464940616a66b96ba8f46cb9772e3a142c2c445" Namespace="kube-system" Pod="coredns-668d6bf9bc-qh5km" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--qh5km-eth0" 
endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--qh5km-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"c24bc14a-01c5-4568-b35c-bcb58b6575e4", ResourceVersion:"870", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 24, 18, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"a1fd44d4c6a4f60a957fb0def464940616a66b96ba8f46cb9772e3a142c2c445", Pod:"coredns-668d6bf9bc-qh5km", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali9b3ec670881", MAC:"ba:c0:93:29:85:68", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:25:14.682572 containerd[1594]: 2025-09-13 00:25:14.650 [INFO][4916] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore 
ContainerID="a1fd44d4c6a4f60a957fb0def464940616a66b96ba8f46cb9772e3a142c2c445" Namespace="kube-system" Pod="coredns-668d6bf9bc-qh5km" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--qh5km-eth0" Sep 13 00:25:15.002648 containerd[1594]: time="2025-09-13T00:25:15.002483445Z" level=info msg="connecting to shim a1fd44d4c6a4f60a957fb0def464940616a66b96ba8f46cb9772e3a142c2c445" address="unix:///run/containerd/s/5c4a89848267213f478e1cdd64d3995cac4ac1828641aa185a593f2df9db74eb" namespace=k8s.io protocol=ttrpc version=3 Sep 13 00:25:15.085962 systemd[1]: Started cri-containerd-a1fd44d4c6a4f60a957fb0def464940616a66b96ba8f46cb9772e3a142c2c445.scope - libcontainer container a1fd44d4c6a4f60a957fb0def464940616a66b96ba8f46cb9772e3a142c2c445. Sep 13 00:25:15.101207 systemd-resolved[1408]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 13 00:25:15.136614 containerd[1594]: time="2025-09-13T00:25:15.136567750Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-qh5km,Uid:c24bc14a-01c5-4568-b35c-bcb58b6575e4,Namespace:kube-system,Attempt:0,} returns sandbox id \"a1fd44d4c6a4f60a957fb0def464940616a66b96ba8f46cb9772e3a142c2c445\"" Sep 13 00:25:15.137709 kubelet[2733]: E0913 00:25:15.137675 2733 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:25:15.141118 containerd[1594]: time="2025-09-13T00:25:15.141089257Z" level=info msg="CreateContainer within sandbox \"a1fd44d4c6a4f60a957fb0def464940616a66b96ba8f46cb9772e3a142c2c445\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Sep 13 00:25:15.155404 containerd[1594]: time="2025-09-13T00:25:15.155351434Z" level=info msg="Container fefbd7021db645754bb9c553064dd0c91701f517f6f7c6a9734e23ea56719132: CDI devices from CRI Config.CDIDevices: []" Sep 13 00:25:15.162733 containerd[1594]: time="2025-09-13T00:25:15.162696661Z" 
level=info msg="CreateContainer within sandbox \"a1fd44d4c6a4f60a957fb0def464940616a66b96ba8f46cb9772e3a142c2c445\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"fefbd7021db645754bb9c553064dd0c91701f517f6f7c6a9734e23ea56719132\"" Sep 13 00:25:15.163292 containerd[1594]: time="2025-09-13T00:25:15.163255303Z" level=info msg="StartContainer for \"fefbd7021db645754bb9c553064dd0c91701f517f6f7c6a9734e23ea56719132\"" Sep 13 00:25:15.165337 containerd[1594]: time="2025-09-13T00:25:15.165217830Z" level=info msg="connecting to shim fefbd7021db645754bb9c553064dd0c91701f517f6f7c6a9734e23ea56719132" address="unix:///run/containerd/s/5c4a89848267213f478e1cdd64d3995cac4ac1828641aa185a593f2df9db74eb" protocol=ttrpc version=3 Sep 13 00:25:15.196043 systemd[1]: Started cri-containerd-fefbd7021db645754bb9c553064dd0c91701f517f6f7c6a9734e23ea56719132.scope - libcontainer container fefbd7021db645754bb9c553064dd0c91701f517f6f7c6a9734e23ea56719132. Sep 13 00:25:15.234717 containerd[1594]: time="2025-09-13T00:25:15.234650694Z" level=info msg="StartContainer for \"fefbd7021db645754bb9c553064dd0c91701f517f6f7c6a9734e23ea56719132\" returns successfully" Sep 13 00:25:15.430825 containerd[1594]: time="2025-09-13T00:25:15.430717215Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:25:15.431903 containerd[1594]: time="2025-09-13T00:25:15.431815834Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.3: active requests=0, bytes read=66357526" Sep 13 00:25:15.436709 containerd[1594]: time="2025-09-13T00:25:15.436641044Z" level=info msg="ImageCreate event name:\"sha256:a7d029fd8f6be94c26af980675c1650818e1e6e19dbd2f8c13e6e61963f021e8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:25:15.439337 containerd[1594]: time="2025-09-13T00:25:15.439285495Z" level=info msg="ImageCreate event 
name:\"ghcr.io/flatcar/calico/goldmane@sha256:46297703ab3739331a00a58f0d6a5498c8d3b6523ad947eed68592ee0f3e79f0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:25:15.439871 containerd[1594]: time="2025-09-13T00:25:15.439830091Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/goldmane:v3.30.3\" with image id \"sha256:a7d029fd8f6be94c26af980675c1650818e1e6e19dbd2f8c13e6e61963f021e8\", repo tag \"ghcr.io/flatcar/calico/goldmane:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/goldmane@sha256:46297703ab3739331a00a58f0d6a5498c8d3b6523ad947eed68592ee0f3e79f0\", size \"66357372\" in 4.515506944s" Sep 13 00:25:15.439916 containerd[1594]: time="2025-09-13T00:25:15.439871038Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.3\" returns image reference \"sha256:a7d029fd8f6be94c26af980675c1650818e1e6e19dbd2f8c13e6e61963f021e8\"" Sep 13 00:25:15.441123 containerd[1594]: time="2025-09-13T00:25:15.441095204Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\"" Sep 13 00:25:15.442405 containerd[1594]: time="2025-09-13T00:25:15.442372891Z" level=info msg="CreateContainer within sandbox \"bb2f082b253bed92d576ec8f7d0bd47095c31aa2579f153d1578e1ee32051b37\" for container &ContainerMetadata{Name:goldmane,Attempt:0,}" Sep 13 00:25:15.452003 containerd[1594]: time="2025-09-13T00:25:15.451961183Z" level=info msg="Container 5b9233f79dd2edb65fd27a4649e2fdd2ab098512367b0d6f9f57ba9985274bb9: CDI devices from CRI Config.CDIDevices: []" Sep 13 00:25:15.460319 containerd[1594]: time="2025-09-13T00:25:15.460280715Z" level=info msg="CreateContainer within sandbox \"bb2f082b253bed92d576ec8f7d0bd47095c31aa2579f153d1578e1ee32051b37\" for &ContainerMetadata{Name:goldmane,Attempt:0,} returns container id \"5b9233f79dd2edb65fd27a4649e2fdd2ab098512367b0d6f9f57ba9985274bb9\"" Sep 13 00:25:15.460814 containerd[1594]: time="2025-09-13T00:25:15.460731343Z" level=info msg="StartContainer for 
\"5b9233f79dd2edb65fd27a4649e2fdd2ab098512367b0d6f9f57ba9985274bb9\"" Sep 13 00:25:15.461758 containerd[1594]: time="2025-09-13T00:25:15.461717722Z" level=info msg="connecting to shim 5b9233f79dd2edb65fd27a4649e2fdd2ab098512367b0d6f9f57ba9985274bb9" address="unix:///run/containerd/s/1c80574a44241ac7b9927a873811a7aa1234598f03ca5ec4197ded09b42d065f" protocol=ttrpc version=3 Sep 13 00:25:15.488944 systemd[1]: Started cri-containerd-5b9233f79dd2edb65fd27a4649e2fdd2ab098512367b0d6f9f57ba9985274bb9.scope - libcontainer container 5b9233f79dd2edb65fd27a4649e2fdd2ab098512367b0d6f9f57ba9985274bb9. Sep 13 00:25:15.538350 containerd[1594]: time="2025-09-13T00:25:15.538305284Z" level=info msg="StartContainer for \"5b9233f79dd2edb65fd27a4649e2fdd2ab098512367b0d6f9f57ba9985274bb9\" returns successfully" Sep 13 00:25:15.742995 systemd-networkd[1503]: cali9b3ec670881: Gained IPv6LL Sep 13 00:25:15.793804 kubelet[2733]: E0913 00:25:15.793553 2733 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:25:15.807552 kubelet[2733]: I0913 00:25:15.806660 2733 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-qh5km" podStartSLOduration=57.806641251 podStartE2EDuration="57.806641251s" podCreationTimestamp="2025-09-13 00:24:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-13 00:25:15.806107857 +0000 UTC m=+64.278590742" watchObservedRunningTime="2025-09-13 00:25:15.806641251 +0000 UTC m=+64.279124136" Sep 13 00:25:15.886260 containerd[1594]: time="2025-09-13T00:25:15.886213517Z" level=info msg="TaskExit event in podsandbox handler container_id:\"5b9233f79dd2edb65fd27a4649e2fdd2ab098512367b0d6f9f57ba9985274bb9\" id:\"5454fb3fb82b671cae432ffd09702009d92fd5b052456ece2776b46219e3503d\" pid:5096 
exited_at:{seconds:1757723115 nanos:885712632}" Sep 13 00:25:15.898600 kubelet[2733]: I0913 00:25:15.898507 2733 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/goldmane-54d579b49d-wq4t8" podStartSLOduration=34.400401941 podStartE2EDuration="44.898486706s" podCreationTimestamp="2025-09-13 00:24:31 +0000 UTC" firstStartedPulling="2025-09-13 00:25:04.942856389 +0000 UTC m=+53.415339264" lastFinishedPulling="2025-09-13 00:25:15.440941144 +0000 UTC m=+63.913424029" observedRunningTime="2025-09-13 00:25:15.868116706 +0000 UTC m=+64.340599611" watchObservedRunningTime="2025-09-13 00:25:15.898486706 +0000 UTC m=+64.370969591" Sep 13 00:25:16.775090 systemd[1]: Started sshd@9-10.0.0.78:22-10.0.0.1:49486.service - OpenSSH per-connection server daemon (10.0.0.1:49486). Sep 13 00:25:16.798197 kubelet[2733]: E0913 00:25:16.798163 2733 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:25:16.833226 sshd[5112]: Accepted publickey for core from 10.0.0.1 port 49486 ssh2: RSA SHA256:/8GM80NBEXcYojGDDtGcQvVXMgw0gDChkDh/EWkms34 Sep 13 00:25:16.835007 sshd-session[5112]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 00:25:16.839727 systemd-logind[1582]: New session 10 of user core. Sep 13 00:25:16.850972 systemd[1]: Started session-10.scope - Session 10 of User core. Sep 13 00:25:17.033288 sshd[5114]: Connection closed by 10.0.0.1 port 49486 Sep 13 00:25:17.035016 sshd-session[5112]: pam_unix(sshd:session): session closed for user core Sep 13 00:25:17.040081 systemd[1]: sshd@9-10.0.0.78:22-10.0.0.1:49486.service: Deactivated successfully. Sep 13 00:25:17.042686 systemd[1]: session-10.scope: Deactivated successfully. Sep 13 00:25:17.043617 systemd-logind[1582]: Session 10 logged out. Waiting for processes to exit. Sep 13 00:25:17.045340 systemd-logind[1582]: Removed session 10. 
Sep 13 00:25:17.800341 kubelet[2733]: E0913 00:25:17.800299 2733 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:25:20.202261 containerd[1594]: time="2025-09-13T00:25:20.202181653Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:25:20.203080 containerd[1594]: time="2025-09-13T00:25:20.203026487Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.3: active requests=0, bytes read=47333864" Sep 13 00:25:20.204295 containerd[1594]: time="2025-09-13T00:25:20.204251779Z" level=info msg="ImageCreate event name:\"sha256:879f2443aed0573271114108bfec35d3e76419f98282ef796c646d0986c5ba6a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:25:20.206759 containerd[1594]: time="2025-09-13T00:25:20.206715356Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:6a24147f11c1edce9d6ba79bdb0c2beadec53853fb43438a287291e67b41e51b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:25:20.207400 containerd[1594]: time="2025-09-13T00:25:20.207365383Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" with image id \"sha256:879f2443aed0573271114108bfec35d3e76419f98282ef796c646d0986c5ba6a\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:6a24147f11c1edce9d6ba79bdb0c2beadec53853fb43438a287291e67b41e51b\", size \"48826583\" in 4.766239502s" Sep 13 00:25:20.207400 containerd[1594]: time="2025-09-13T00:25:20.207393458Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" returns image reference \"sha256:879f2443aed0573271114108bfec35d3e76419f98282ef796c646d0986c5ba6a\"" Sep 13 00:25:20.212254 containerd[1594]: time="2025-09-13T00:25:20.212229467Z" level=info 
msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\"" Sep 13 00:25:20.225811 containerd[1594]: time="2025-09-13T00:25:20.225760441Z" level=info msg="CreateContainer within sandbox \"ad9c8b68a24914c303e9b0ffeb81e88c07ddb4efb1d7710a19e81d83b1d5901c\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Sep 13 00:25:20.235171 containerd[1594]: time="2025-09-13T00:25:20.235131115Z" level=info msg="Container 249f34bdf80089335460f36aa3a40caec869eb33ebc041f99446a94e9e1cf66e: CDI devices from CRI Config.CDIDevices: []" Sep 13 00:25:20.244836 containerd[1594]: time="2025-09-13T00:25:20.244763305Z" level=info msg="CreateContainer within sandbox \"ad9c8b68a24914c303e9b0ffeb81e88c07ddb4efb1d7710a19e81d83b1d5901c\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"249f34bdf80089335460f36aa3a40caec869eb33ebc041f99446a94e9e1cf66e\"" Sep 13 00:25:20.245470 containerd[1594]: time="2025-09-13T00:25:20.245432530Z" level=info msg="StartContainer for \"249f34bdf80089335460f36aa3a40caec869eb33ebc041f99446a94e9e1cf66e\"" Sep 13 00:25:20.246513 containerd[1594]: time="2025-09-13T00:25:20.246482171Z" level=info msg="connecting to shim 249f34bdf80089335460f36aa3a40caec869eb33ebc041f99446a94e9e1cf66e" address="unix:///run/containerd/s/48913411079c8dbfe3fa1fd3df4fb566f4d2df156d976e2d83ce614a5fc5a48f" protocol=ttrpc version=3 Sep 13 00:25:20.268944 systemd[1]: Started cri-containerd-249f34bdf80089335460f36aa3a40caec869eb33ebc041f99446a94e9e1cf66e.scope - libcontainer container 249f34bdf80089335460f36aa3a40caec869eb33ebc041f99446a94e9e1cf66e. 
Sep 13 00:25:20.555639 containerd[1594]: time="2025-09-13T00:25:20.555499578Z" level=info msg="StartContainer for \"249f34bdf80089335460f36aa3a40caec869eb33ebc041f99446a94e9e1cf66e\" returns successfully" Sep 13 00:25:20.647502 containerd[1594]: time="2025-09-13T00:25:20.647444106Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:25:20.656329 containerd[1594]: time="2025-09-13T00:25:20.656241209Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.3: active requests=0, bytes read=77" Sep 13 00:25:20.658460 containerd[1594]: time="2025-09-13T00:25:20.658394707Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" with image id \"sha256:879f2443aed0573271114108bfec35d3e76419f98282ef796c646d0986c5ba6a\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:6a24147f11c1edce9d6ba79bdb0c2beadec53853fb43438a287291e67b41e51b\", size \"48826583\" in 446.137576ms" Sep 13 00:25:20.658460 containerd[1594]: time="2025-09-13T00:25:20.658439233Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" returns image reference \"sha256:879f2443aed0573271114108bfec35d3e76419f98282ef796c646d0986c5ba6a\"" Sep 13 00:25:20.659671 containerd[1594]: time="2025-09-13T00:25:20.659643494Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.3\"" Sep 13 00:25:20.661401 containerd[1594]: time="2025-09-13T00:25:20.661337682Z" level=info msg="CreateContainer within sandbox \"9e7880b36bcf1eb6dc7500e22b9f206ad748c77977b3b958da7dedfe3ce8f381\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Sep 13 00:25:20.685738 containerd[1594]: time="2025-09-13T00:25:20.685020752Z" level=info msg="Container 550218eee5d8b2510a3da6534f21a3c7a2cab8f255769a7ed86875ac7fb0cb2e: CDI devices from CRI Config.CDIDevices: []" Sep 13 00:25:20.704178 containerd[1594]: 
time="2025-09-13T00:25:20.704097649Z" level=info msg="CreateContainer within sandbox \"9e7880b36bcf1eb6dc7500e22b9f206ad748c77977b3b958da7dedfe3ce8f381\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"550218eee5d8b2510a3da6534f21a3c7a2cab8f255769a7ed86875ac7fb0cb2e\"" Sep 13 00:25:20.704956 containerd[1594]: time="2025-09-13T00:25:20.704919068Z" level=info msg="StartContainer for \"550218eee5d8b2510a3da6534f21a3c7a2cab8f255769a7ed86875ac7fb0cb2e\"" Sep 13 00:25:20.706423 containerd[1594]: time="2025-09-13T00:25:20.706373513Z" level=info msg="connecting to shim 550218eee5d8b2510a3da6534f21a3c7a2cab8f255769a7ed86875ac7fb0cb2e" address="unix:///run/containerd/s/e0d67e317334a228c884233ad39e3a8c0877ef9a3c932dc2254f22f5896cfdf6" protocol=ttrpc version=3 Sep 13 00:25:20.736995 systemd[1]: Started cri-containerd-550218eee5d8b2510a3da6534f21a3c7a2cab8f255769a7ed86875ac7fb0cb2e.scope - libcontainer container 550218eee5d8b2510a3da6534f21a3c7a2cab8f255769a7ed86875ac7fb0cb2e. 
Sep 13 00:25:21.033162 containerd[1594]: time="2025-09-13T00:25:21.033093334Z" level=info msg="StartContainer for \"550218eee5d8b2510a3da6534f21a3c7a2cab8f255769a7ed86875ac7fb0cb2e\" returns successfully" Sep 13 00:25:21.108230 kubelet[2733]: I0913 00:25:21.107828 2733 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-6c686fdbf4-mjkpg" podStartSLOduration=38.223011877 podStartE2EDuration="52.107803058s" podCreationTimestamp="2025-09-13 00:24:29 +0000 UTC" firstStartedPulling="2025-09-13 00:25:06.327183582 +0000 UTC m=+54.799666467" lastFinishedPulling="2025-09-13 00:25:20.211974763 +0000 UTC m=+68.684457648" observedRunningTime="2025-09-13 00:25:21.082642309 +0000 UTC m=+69.555125194" watchObservedRunningTime="2025-09-13 00:25:21.107803058 +0000 UTC m=+69.580285973" Sep 13 00:25:21.108230 kubelet[2733]: I0913 00:25:21.107967 2733 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-6c686fdbf4-6mwrv" podStartSLOduration=38.124052285 podStartE2EDuration="52.107961985s" podCreationTimestamp="2025-09-13 00:24:29 +0000 UTC" firstStartedPulling="2025-09-13 00:25:06.675445265 +0000 UTC m=+55.147928150" lastFinishedPulling="2025-09-13 00:25:20.659354965 +0000 UTC m=+69.131837850" observedRunningTime="2025-09-13 00:25:21.103450341 +0000 UTC m=+69.575933226" watchObservedRunningTime="2025-09-13 00:25:21.107961985 +0000 UTC m=+69.580444890" Sep 13 00:25:22.049409 systemd[1]: Started sshd@10-10.0.0.78:22-10.0.0.1:55700.service - OpenSSH per-connection server daemon (10.0.0.1:55700). 
Sep 13 00:25:22.050811 kubelet[2733]: I0913 00:25:22.050760 2733 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 13 00:25:22.119566 sshd[5216]: Accepted publickey for core from 10.0.0.1 port 55700 ssh2: RSA SHA256:/8GM80NBEXcYojGDDtGcQvVXMgw0gDChkDh/EWkms34 Sep 13 00:25:22.121724 sshd-session[5216]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 00:25:22.128749 systemd-logind[1582]: New session 11 of user core. Sep 13 00:25:22.137081 systemd[1]: Started session-11.scope - Session 11 of User core. Sep 13 00:25:22.344340 sshd[5218]: Connection closed by 10.0.0.1 port 55700 Sep 13 00:25:22.346292 sshd-session[5216]: pam_unix(sshd:session): session closed for user core Sep 13 00:25:22.356133 systemd[1]: sshd@10-10.0.0.78:22-10.0.0.1:55700.service: Deactivated successfully. Sep 13 00:25:22.359617 systemd[1]: session-11.scope: Deactivated successfully. Sep 13 00:25:22.361171 systemd-logind[1582]: Session 11 logged out. Waiting for processes to exit. Sep 13 00:25:22.368109 systemd[1]: Started sshd@11-10.0.0.78:22-10.0.0.1:55708.service - OpenSSH per-connection server daemon (10.0.0.1:55708). Sep 13 00:25:22.369085 systemd-logind[1582]: Removed session 11. Sep 13 00:25:22.420407 sshd[5235]: Accepted publickey for core from 10.0.0.1 port 55708 ssh2: RSA SHA256:/8GM80NBEXcYojGDDtGcQvVXMgw0gDChkDh/EWkms34 Sep 13 00:25:22.422716 sshd-session[5235]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 00:25:22.428371 systemd-logind[1582]: New session 12 of user core. Sep 13 00:25:22.441116 systemd[1]: Started session-12.scope - Session 12 of User core. Sep 13 00:25:22.670035 sshd[5237]: Connection closed by 10.0.0.1 port 55708 Sep 13 00:25:22.670291 sshd-session[5235]: pam_unix(sshd:session): session closed for user core Sep 13 00:25:22.684646 systemd[1]: sshd@11-10.0.0.78:22-10.0.0.1:55708.service: Deactivated successfully. 
Sep 13 00:25:22.691532 systemd[1]: session-12.scope: Deactivated successfully. Sep 13 00:25:22.695319 systemd-logind[1582]: Session 12 logged out. Waiting for processes to exit. Sep 13 00:25:22.699213 systemd-logind[1582]: Removed session 12. Sep 13 00:25:22.704090 systemd[1]: Started sshd@12-10.0.0.78:22-10.0.0.1:55722.service - OpenSSH per-connection server daemon (10.0.0.1:55722). Sep 13 00:25:22.757523 sshd[5252]: Accepted publickey for core from 10.0.0.1 port 55722 ssh2: RSA SHA256:/8GM80NBEXcYojGDDtGcQvVXMgw0gDChkDh/EWkms34 Sep 13 00:25:22.759548 sshd-session[5252]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 00:25:22.765653 systemd-logind[1582]: New session 13 of user core. Sep 13 00:25:22.782017 systemd[1]: Started session-13.scope - Session 13 of User core. Sep 13 00:25:22.907734 sshd[5254]: Connection closed by 10.0.0.1 port 55722 Sep 13 00:25:22.908168 sshd-session[5252]: pam_unix(sshd:session): session closed for user core Sep 13 00:25:22.913486 systemd[1]: sshd@12-10.0.0.78:22-10.0.0.1:55722.service: Deactivated successfully. Sep 13 00:25:22.916208 systemd[1]: session-13.scope: Deactivated successfully. Sep 13 00:25:22.917247 systemd-logind[1582]: Session 13 logged out. Waiting for processes to exit. Sep 13 00:25:22.918905 systemd-logind[1582]: Removed session 13. 
Sep 13 00:25:24.369327 containerd[1594]: time="2025-09-13T00:25:24.369259606Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:25:24.370315 containerd[1594]: time="2025-09-13T00:25:24.370273752Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.3: active requests=0, bytes read=8760527" Sep 13 00:25:24.385096 containerd[1594]: time="2025-09-13T00:25:24.385035967Z" level=info msg="ImageCreate event name:\"sha256:666f4e02e75c30547109a06ed75b415a990a970811173aa741379cfaac4d9dd7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:25:24.428470 containerd[1594]: time="2025-09-13T00:25:24.428393159Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:f22c88018d8b58c4ef0052f594b216a13bd6852166ac131a538c5ab2fba23bb2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:25:24.429253 containerd[1594]: time="2025-09-13T00:25:24.429193423Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.30.3\" with image id \"sha256:666f4e02e75c30547109a06ed75b415a990a970811173aa741379cfaac4d9dd7\", repo tag \"ghcr.io/flatcar/calico/csi:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:f22c88018d8b58c4ef0052f594b216a13bd6852166ac131a538c5ab2fba23bb2\", size \"10253230\" in 3.769402986s" Sep 13 00:25:24.429253 containerd[1594]: time="2025-09-13T00:25:24.429245884Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.3\" returns image reference \"sha256:666f4e02e75c30547109a06ed75b415a990a970811173aa741379cfaac4d9dd7\"" Sep 13 00:25:24.430371 containerd[1594]: time="2025-09-13T00:25:24.430312802Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\"" Sep 13 00:25:24.432988 containerd[1594]: time="2025-09-13T00:25:24.432949699Z" level=info msg="CreateContainer within sandbox \"33151c8930944d76842eaf1bc53bace63743974209727ab482cc53f324e1c9a2\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Sep 13 00:25:24.645111 containerd[1594]: time="2025-09-13T00:25:24.644963494Z" level=info msg="Container 064bc31e03348ab43b51333060a995a2f8c3ec8ec228c830276cf794c8279597: CDI devices from CRI Config.CDIDevices: []" Sep 13 00:25:24.692575 containerd[1594]: time="2025-09-13T00:25:24.692507753Z" level=info msg="CreateContainer within sandbox \"33151c8930944d76842eaf1bc53bace63743974209727ab482cc53f324e1c9a2\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"064bc31e03348ab43b51333060a995a2f8c3ec8ec228c830276cf794c8279597\"" Sep 13 00:25:24.693965 containerd[1594]: time="2025-09-13T00:25:24.693932912Z" level=info msg="StartContainer for \"064bc31e03348ab43b51333060a995a2f8c3ec8ec228c830276cf794c8279597\"" Sep 13 00:25:24.695734 containerd[1594]: time="2025-09-13T00:25:24.695690382Z" level=info msg="connecting to shim 064bc31e03348ab43b51333060a995a2f8c3ec8ec228c830276cf794c8279597" address="unix:///run/containerd/s/a941db825722fba0c761d2ad021cb56e241d3b2d38fabcc83736f1d86ea8424d" protocol=ttrpc version=3 Sep 13 00:25:24.718138 systemd[1]: Started cri-containerd-064bc31e03348ab43b51333060a995a2f8c3ec8ec228c830276cf794c8279597.scope - libcontainer container 064bc31e03348ab43b51333060a995a2f8c3ec8ec228c830276cf794c8279597. Sep 13 00:25:24.768440 containerd[1594]: time="2025-09-13T00:25:24.768394552Z" level=info msg="StartContainer for \"064bc31e03348ab43b51333060a995a2f8c3ec8ec228c830276cf794c8279597\" returns successfully" Sep 13 00:25:27.201638 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1632419476.mount: Deactivated successfully.
Sep 13 00:25:27.225143 containerd[1594]: time="2025-09-13T00:25:27.225048922Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:25:27.226206 containerd[1594]: time="2025-09-13T00:25:27.226144220Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.3: active requests=0, bytes read=33085545" Sep 13 00:25:27.227407 containerd[1594]: time="2025-09-13T00:25:27.227373256Z" level=info msg="ImageCreate event name:\"sha256:7e29b0984d517678aab6ca138482c318989f6f28daf9d3b5dd6e4a5a3115ac16\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:25:27.229529 containerd[1594]: time="2025-09-13T00:25:27.229479931Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend@sha256:29becebc47401da9997a2a30f4c25c511a5f379d17275680b048224829af71a5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:25:27.230084 containerd[1594]: time="2025-09-13T00:25:27.230024609Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\" with image id \"sha256:7e29b0984d517678aab6ca138482c318989f6f28daf9d3b5dd6e4a5a3115ac16\", repo tag \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/whisker-backend@sha256:29becebc47401da9997a2a30f4c25c511a5f379d17275680b048224829af71a5\", size \"33085375\" in 2.799679846s" Sep 13 00:25:27.230144 containerd[1594]: time="2025-09-13T00:25:27.230080597Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\" returns image reference \"sha256:7e29b0984d517678aab6ca138482c318989f6f28daf9d3b5dd6e4a5a3115ac16\"" Sep 13 00:25:27.231468 containerd[1594]: time="2025-09-13T00:25:27.231424694Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\"" Sep 13 00:25:27.232911 containerd[1594]: time="2025-09-13T00:25:27.232860878Z" level=info msg="CreateContainer within sandbox \"e89d09d5669a9c63d58481a0f4005a6578382af54548a965d7d82098e595ad2a\" for container &ContainerMetadata{Name:whisker-backend,Attempt:0,}" Sep 13 00:25:27.274535 containerd[1594]: time="2025-09-13T00:25:27.274475756Z" level=info msg="Container 2ea7b7968939abe682699fa3a23fb34e17c2c481180b277f232884401c6d473b: CDI devices from CRI Config.CDIDevices: []" Sep 13 00:25:27.285055 containerd[1594]: time="2025-09-13T00:25:27.285008298Z" level=info msg="CreateContainer within sandbox \"e89d09d5669a9c63d58481a0f4005a6578382af54548a965d7d82098e595ad2a\" for &ContainerMetadata{Name:whisker-backend,Attempt:0,} returns container id \"2ea7b7968939abe682699fa3a23fb34e17c2c481180b277f232884401c6d473b\"" Sep 13 00:25:27.285644 containerd[1594]: time="2025-09-13T00:25:27.285606859Z" level=info msg="StartContainer for \"2ea7b7968939abe682699fa3a23fb34e17c2c481180b277f232884401c6d473b\"" Sep 13 00:25:27.286741 containerd[1594]: time="2025-09-13T00:25:27.286712698Z" level=info msg="connecting to shim 2ea7b7968939abe682699fa3a23fb34e17c2c481180b277f232884401c6d473b" address="unix:///run/containerd/s/ab4b2175a94c8432d09b2a8fcd716e8bf295437e0576f55805a385d303d21cfe" protocol=ttrpc version=3 Sep 13 00:25:27.328994 systemd[1]: Started cri-containerd-2ea7b7968939abe682699fa3a23fb34e17c2c481180b277f232884401c6d473b.scope - libcontainer container 2ea7b7968939abe682699fa3a23fb34e17c2c481180b277f232884401c6d473b. Sep 13 00:25:27.395916 containerd[1594]: time="2025-09-13T00:25:27.395861008Z" level=info msg="StartContainer for \"2ea7b7968939abe682699fa3a23fb34e17c2c481180b277f232884401c6d473b\" returns successfully" Sep 13 00:25:27.921331 systemd[1]: Started sshd@13-10.0.0.78:22-10.0.0.1:55730.service - OpenSSH per-connection server daemon (10.0.0.1:55730).
Sep 13 00:25:28.001287 sshd[5348]: Accepted publickey for core from 10.0.0.1 port 55730 ssh2: RSA SHA256:/8GM80NBEXcYojGDDtGcQvVXMgw0gDChkDh/EWkms34 Sep 13 00:25:28.004226 sshd-session[5348]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 00:25:28.009973 systemd-logind[1582]: New session 14 of user core. Sep 13 00:25:28.018955 systemd[1]: Started session-14.scope - Session 14 of User core. Sep 13 00:25:28.165956 kubelet[2733]: I0913 00:25:28.165869 2733 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/whisker-d44446994-d9vnz" podStartSLOduration=3.694288867 podStartE2EDuration="26.16585121s" podCreationTimestamp="2025-09-13 00:25:02 +0000 UTC" firstStartedPulling="2025-09-13 00:25:04.759659721 +0000 UTC m=+53.232142606" lastFinishedPulling="2025-09-13 00:25:27.231222054 +0000 UTC m=+75.703704949" observedRunningTime="2025-09-13 00:25:28.163442066 +0000 UTC m=+76.635924971" watchObservedRunningTime="2025-09-13 00:25:28.16585121 +0000 UTC m=+76.638334095" Sep 13 00:25:28.207386 sshd[5350]: Connection closed by 10.0.0.1 port 55730 Sep 13 00:25:28.208061 sshd-session[5348]: pam_unix(sshd:session): session closed for user core Sep 13 00:25:28.212771 systemd[1]: sshd@13-10.0.0.78:22-10.0.0.1:55730.service: Deactivated successfully. Sep 13 00:25:28.215223 systemd[1]: session-14.scope: Deactivated successfully. Sep 13 00:25:28.216204 systemd-logind[1582]: Session 14 logged out. Waiting for processes to exit. Sep 13 00:25:28.217679 systemd-logind[1582]: Removed session 14. 
Sep 13 00:25:28.608831 kubelet[2733]: E0913 00:25:28.608775 2733 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:25:30.283821 containerd[1594]: time="2025-09-13T00:25:30.283655897Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:25:30.284589 containerd[1594]: time="2025-09-13T00:25:30.284491311Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3: active requests=0, bytes read=14698542" Sep 13 00:25:30.285933 containerd[1594]: time="2025-09-13T00:25:30.285889526Z" level=info msg="ImageCreate event name:\"sha256:b8f31c4fdaed3fa08af64de3d37d65a4c2ea0d9f6f522cb60d2e0cb424f8dd8a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:25:30.288152 containerd[1594]: time="2025-09-13T00:25:30.288115192Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:731ab232ca708102ab332340b1274d5cd656aa896ecc5368ee95850b811df86f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:25:30.289174 containerd[1594]: time="2025-09-13T00:25:30.289024529Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\" with image id \"sha256:b8f31c4fdaed3fa08af64de3d37d65a4c2ea0d9f6f522cb60d2e0cb424f8dd8a\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:731ab232ca708102ab332340b1274d5cd656aa896ecc5368ee95850b811df86f\", size \"16191197\" in 3.057567603s" Sep 13 00:25:30.289174 containerd[1594]: time="2025-09-13T00:25:30.289088062Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\" returns image reference \"sha256:b8f31c4fdaed3fa08af64de3d37d65a4c2ea0d9f6f522cb60d2e0cb424f8dd8a\"" Sep 13 00:25:30.296541 containerd[1594]: time="2025-09-13T00:25:30.296483387Z" level=info msg="CreateContainer within sandbox \"33151c8930944d76842eaf1bc53bace63743974209727ab482cc53f324e1c9a2\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Sep 13 00:25:30.308053 containerd[1594]: time="2025-09-13T00:25:30.307995119Z" level=info msg="Container 67b7f93034bc73e11c2037e7272aa2fc75eaff7ca1d4b35f0c01d769e17e104a: CDI devices from CRI Config.CDIDevices: []" Sep 13 00:25:30.321978 containerd[1594]: time="2025-09-13T00:25:30.321902715Z" level=info msg="CreateContainer within sandbox \"33151c8930944d76842eaf1bc53bace63743974209727ab482cc53f324e1c9a2\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"67b7f93034bc73e11c2037e7272aa2fc75eaff7ca1d4b35f0c01d769e17e104a\"" Sep 13 00:25:30.322499 containerd[1594]: time="2025-09-13T00:25:30.322453473Z" level=info msg="StartContainer for \"67b7f93034bc73e11c2037e7272aa2fc75eaff7ca1d4b35f0c01d769e17e104a\"" Sep 13 00:25:30.323963 containerd[1594]: time="2025-09-13T00:25:30.323927315Z" level=info msg="connecting to shim 67b7f93034bc73e11c2037e7272aa2fc75eaff7ca1d4b35f0c01d769e17e104a" address="unix:///run/containerd/s/a941db825722fba0c761d2ad021cb56e241d3b2d38fabcc83736f1d86ea8424d" protocol=ttrpc version=3 Sep 13 00:25:30.356996 systemd[1]: Started cri-containerd-67b7f93034bc73e11c2037e7272aa2fc75eaff7ca1d4b35f0c01d769e17e104a.scope - libcontainer container 67b7f93034bc73e11c2037e7272aa2fc75eaff7ca1d4b35f0c01d769e17e104a.
Sep 13 00:25:30.424153 containerd[1594]: time="2025-09-13T00:25:30.424095238Z" level=info msg="StartContainer for \"67b7f93034bc73e11c2037e7272aa2fc75eaff7ca1d4b35f0c01d769e17e104a\" returns successfully" Sep 13 00:25:30.693187 kubelet[2733]: I0913 00:25:30.693114 2733 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Sep 13 00:25:30.694036 kubelet[2733]: I0913 00:25:30.693230 2733 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Sep 13 00:25:31.115813 kubelet[2733]: I0913 00:25:31.115479 2733 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-zh9k4" podStartSLOduration=36.803564013 podStartE2EDuration="1m0.115461587s" podCreationTimestamp="2025-09-13 00:24:31 +0000 UTC" firstStartedPulling="2025-09-13 00:25:06.980755826 +0000 UTC m=+55.453238711" lastFinishedPulling="2025-09-13 00:25:30.2926534 +0000 UTC m=+78.765136285" observedRunningTime="2025-09-13 00:25:31.114991675 +0000 UTC m=+79.587474560" watchObservedRunningTime="2025-09-13 00:25:31.115461587 +0000 UTC m=+79.587944472" Sep 13 00:25:31.609760 kubelet[2733]: E0913 00:25:31.609719 2733 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:25:31.816369 containerd[1594]: time="2025-09-13T00:25:31.816310742Z" level=info msg="TaskExit event in podsandbox handler container_id:\"402657c57ad2d40d55213b44d0500c19c93d8b60c1182b2a1d2b4c3c4152f7ee\" id:\"2a4c7ee29073458b08d6a52a6615dfee9b89ef7156e2a44b46d6e9022412aca4\" pid:5417 exit_status:1 exited_at:{seconds:1757723131 nanos:815269153}" Sep 13 00:25:31.921968 containerd[1594]: time="2025-09-13T00:25:31.921839358Z" level=info msg="TaskExit event in podsandbox handler container_id:\"402657c57ad2d40d55213b44d0500c19c93d8b60c1182b2a1d2b4c3c4152f7ee\" id:\"271789d41f16c4950828b4bc4280cbfa36a0809698deabc54f3424eceeec65bc\" pid:5441 exit_status:1 exited_at:{seconds:1757723131 nanos:921439460}" Sep 13 00:25:32.906122 containerd[1594]: time="2025-09-13T00:25:32.906057058Z" level=info msg="TaskExit event in podsandbox handler container_id:\"ced5a06fdc3d04e048fcc88c6316396981ce8af64159be806439646d6faa744d\" id:\"c6324c7cc6243a15866e78cad20a66e8ee70c62a7fb3c881f9aa2295a9add67d\" pid:5467 exited_at:{seconds:1757723132 nanos:903459303}" Sep 13 00:25:33.226246 systemd[1]: Started sshd@14-10.0.0.78:22-10.0.0.1:49376.service - OpenSSH per-connection server daemon (10.0.0.1:49376). Sep 13 00:25:33.301916 sshd[5478]: Accepted publickey for core from 10.0.0.1 port 49376 ssh2: RSA SHA256:/8GM80NBEXcYojGDDtGcQvVXMgw0gDChkDh/EWkms34 Sep 13 00:25:33.303449 sshd-session[5478]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 00:25:33.307853 systemd-logind[1582]: New session 15 of user core. Sep 13 00:25:33.319004 systemd[1]: Started session-15.scope - Session 15 of User core. Sep 13 00:25:33.631194 sshd[5480]: Connection closed by 10.0.0.1 port 49376 Sep 13 00:25:33.631568 sshd-session[5478]: pam_unix(sshd:session): session closed for user core Sep 13 00:25:33.636090 systemd[1]: sshd@14-10.0.0.78:22-10.0.0.1:49376.service: Deactivated successfully. Sep 13 00:25:33.638378 systemd[1]: session-15.scope: Deactivated successfully. Sep 13 00:25:33.639281 systemd-logind[1582]: Session 15 logged out. Waiting for processes to exit. Sep 13 00:25:33.641202 systemd-logind[1582]: Removed session 15. Sep 13 00:25:37.060849 kubelet[2733]: I0913 00:25:37.060521 2733 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 13 00:25:38.650619 systemd[1]: Started sshd@15-10.0.0.78:22-10.0.0.1:49388.service - OpenSSH per-connection server daemon (10.0.0.1:49388).
Sep 13 00:25:38.706056 sshd[5501]: Accepted publickey for core from 10.0.0.1 port 49388 ssh2: RSA SHA256:/8GM80NBEXcYojGDDtGcQvVXMgw0gDChkDh/EWkms34 Sep 13 00:25:38.707770 sshd-session[5501]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 00:25:38.712779 systemd-logind[1582]: New session 16 of user core. Sep 13 00:25:38.719985 systemd[1]: Started session-16.scope - Session 16 of User core. Sep 13 00:25:38.824550 containerd[1594]: time="2025-09-13T00:25:38.824490195Z" level=info msg="TaskExit event in podsandbox handler container_id:\"ced5a06fdc3d04e048fcc88c6316396981ce8af64159be806439646d6faa744d\" id:\"6485d4c0da56c4604eeba1a6a231174a5b1984edace676324b0ed537fe65af39\" pid:5525 exited_at:{seconds:1757723138 nanos:824298979}" Sep 13 00:25:38.847517 sshd[5503]: Connection closed by 10.0.0.1 port 49388 Sep 13 00:25:38.847873 sshd-session[5501]: pam_unix(sshd:session): session closed for user core Sep 13 00:25:38.855325 systemd[1]: sshd@15-10.0.0.78:22-10.0.0.1:49388.service: Deactivated successfully. Sep 13 00:25:38.857645 systemd[1]: session-16.scope: Deactivated successfully. Sep 13 00:25:38.858631 systemd-logind[1582]: Session 16 logged out. Waiting for processes to exit. Sep 13 00:25:38.860603 systemd-logind[1582]: Removed session 16. 
Sep 13 00:25:41.609605 kubelet[2733]: E0913 00:25:41.609552 2733 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:25:41.663895 containerd[1594]: time="2025-09-13T00:25:41.663836631Z" level=info msg="TaskExit event in podsandbox handler container_id:\"5b9233f79dd2edb65fd27a4649e2fdd2ab098512367b0d6f9f57ba9985274bb9\" id:\"567ce0f623b734b70264d140c950157d004b91f788d4082b52f82c87dd7fd3ef\" pid:5551 exited_at:{seconds:1757723141 nanos:663599518}" Sep 13 00:25:42.609161 kubelet[2733]: E0913 00:25:42.609115 2733 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:25:43.864632 systemd[1]: Started sshd@16-10.0.0.78:22-10.0.0.1:52538.service - OpenSSH per-connection server daemon (10.0.0.1:52538). Sep 13 00:25:43.924923 sshd[5564]: Accepted publickey for core from 10.0.0.1 port 52538 ssh2: RSA SHA256:/8GM80NBEXcYojGDDtGcQvVXMgw0gDChkDh/EWkms34 Sep 13 00:25:43.926511 sshd-session[5564]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 00:25:43.934429 systemd-logind[1582]: New session 17 of user core. Sep 13 00:25:43.944968 systemd[1]: Started session-17.scope - Session 17 of User core. Sep 13 00:25:44.093478 sshd[5566]: Connection closed by 10.0.0.1 port 52538 Sep 13 00:25:44.094053 sshd-session[5564]: pam_unix(sshd:session): session closed for user core Sep 13 00:25:44.104990 systemd[1]: sshd@16-10.0.0.78:22-10.0.0.1:52538.service: Deactivated successfully. Sep 13 00:25:44.107237 systemd[1]: session-17.scope: Deactivated successfully. Sep 13 00:25:44.108434 systemd-logind[1582]: Session 17 logged out. Waiting for processes to exit. Sep 13 00:25:44.112661 systemd[1]: Started sshd@17-10.0.0.78:22-10.0.0.1:52544.service - OpenSSH per-connection server daemon (10.0.0.1:52544). 
Sep 13 00:25:44.113470 systemd-logind[1582]: Removed session 17. Sep 13 00:25:44.166750 sshd[5580]: Accepted publickey for core from 10.0.0.1 port 52544 ssh2: RSA SHA256:/8GM80NBEXcYojGDDtGcQvVXMgw0gDChkDh/EWkms34 Sep 13 00:25:44.168816 sshd-session[5580]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 00:25:44.173937 systemd-logind[1582]: New session 18 of user core. Sep 13 00:25:44.184100 systemd[1]: Started session-18.scope - Session 18 of User core. Sep 13 00:25:44.620671 sshd[5582]: Connection closed by 10.0.0.1 port 52544 Sep 13 00:25:44.621354 sshd-session[5580]: pam_unix(sshd:session): session closed for user core Sep 13 00:25:44.632408 systemd[1]: sshd@17-10.0.0.78:22-10.0.0.1:52544.service: Deactivated successfully. Sep 13 00:25:44.635256 systemd[1]: session-18.scope: Deactivated successfully. Sep 13 00:25:44.636318 systemd-logind[1582]: Session 18 logged out. Waiting for processes to exit. Sep 13 00:25:44.641335 systemd[1]: Started sshd@18-10.0.0.78:22-10.0.0.1:52558.service - OpenSSH per-connection server daemon (10.0.0.1:52558). Sep 13 00:25:44.642391 systemd-logind[1582]: Removed session 18. Sep 13 00:25:44.706963 sshd[5599]: Accepted publickey for core from 10.0.0.1 port 52558 ssh2: RSA SHA256:/8GM80NBEXcYojGDDtGcQvVXMgw0gDChkDh/EWkms34 Sep 13 00:25:44.708702 sshd-session[5599]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 00:25:44.714047 systemd-logind[1582]: New session 19 of user core. Sep 13 00:25:44.727951 systemd[1]: Started session-19.scope - Session 19 of User core. Sep 13 00:25:45.591867 sshd[5601]: Connection closed by 10.0.0.1 port 52558 Sep 13 00:25:45.592530 sshd-session[5599]: pam_unix(sshd:session): session closed for user core Sep 13 00:25:45.603723 systemd[1]: sshd@18-10.0.0.78:22-10.0.0.1:52558.service: Deactivated successfully. Sep 13 00:25:45.606004 systemd[1]: session-19.scope: Deactivated successfully. 
Sep 13 00:25:45.607110 systemd-logind[1582]: Session 19 logged out. Waiting for processes to exit. Sep 13 00:25:45.611857 systemd[1]: Started sshd@19-10.0.0.78:22-10.0.0.1:52560.service - OpenSSH per-connection server daemon (10.0.0.1:52560). Sep 13 00:25:45.612953 systemd-logind[1582]: Removed session 19. Sep 13 00:25:45.673556 sshd[5622]: Accepted publickey for core from 10.0.0.1 port 52560 ssh2: RSA SHA256:/8GM80NBEXcYojGDDtGcQvVXMgw0gDChkDh/EWkms34 Sep 13 00:25:45.675396 sshd-session[5622]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 00:25:45.680135 systemd-logind[1582]: New session 20 of user core. Sep 13 00:25:45.686929 systemd[1]: Started session-20.scope - Session 20 of User core. Sep 13 00:25:45.936381 containerd[1594]: time="2025-09-13T00:25:45.936224473Z" level=info msg="TaskExit event in podsandbox handler container_id:\"5b9233f79dd2edb65fd27a4649e2fdd2ab098512367b0d6f9f57ba9985274bb9\" id:\"8144fe264e7e73809ac22eff233ef7bdf4dcda662220097176d2e1aeee4fae55\" pid:5644 exited_at:{seconds:1757723145 nanos:935841843}" Sep 13 00:25:46.063607 sshd[5624]: Connection closed by 10.0.0.1 port 52560 Sep 13 00:25:46.063941 sshd-session[5622]: pam_unix(sshd:session): session closed for user core Sep 13 00:25:46.075831 systemd[1]: sshd@19-10.0.0.78:22-10.0.0.1:52560.service: Deactivated successfully. Sep 13 00:25:46.078675 systemd[1]: session-20.scope: Deactivated successfully. Sep 13 00:25:46.079634 systemd-logind[1582]: Session 20 logged out. Waiting for processes to exit. Sep 13 00:25:46.082453 systemd-logind[1582]: Removed session 20. Sep 13 00:25:46.084628 systemd[1]: Started sshd@20-10.0.0.78:22-10.0.0.1:52562.service - OpenSSH per-connection server daemon (10.0.0.1:52562). 
Sep 13 00:25:46.149188 sshd[5661]: Accepted publickey for core from 10.0.0.1 port 52562 ssh2: RSA SHA256:/8GM80NBEXcYojGDDtGcQvVXMgw0gDChkDh/EWkms34 Sep 13 00:25:46.151141 sshd-session[5661]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 00:25:46.156588 systemd-logind[1582]: New session 21 of user core. Sep 13 00:25:46.169976 systemd[1]: Started session-21.scope - Session 21 of User core. Sep 13 00:25:46.287073 sshd[5663]: Connection closed by 10.0.0.1 port 52562 Sep 13 00:25:46.287353 sshd-session[5661]: pam_unix(sshd:session): session closed for user core Sep 13 00:25:46.292631 systemd[1]: sshd@20-10.0.0.78:22-10.0.0.1:52562.service: Deactivated successfully. Sep 13 00:25:46.294964 systemd[1]: session-21.scope: Deactivated successfully. Sep 13 00:25:46.295796 systemd-logind[1582]: Session 21 logged out. Waiting for processes to exit. Sep 13 00:25:46.297980 systemd-logind[1582]: Removed session 21. Sep 13 00:25:51.304668 systemd[1]: Started sshd@21-10.0.0.78:22-10.0.0.1:57724.service - OpenSSH per-connection server daemon (10.0.0.1:57724). Sep 13 00:25:51.368236 sshd[5678]: Accepted publickey for core from 10.0.0.1 port 57724 ssh2: RSA SHA256:/8GM80NBEXcYojGDDtGcQvVXMgw0gDChkDh/EWkms34 Sep 13 00:25:51.369760 sshd-session[5678]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 00:25:51.374207 systemd-logind[1582]: New session 22 of user core. Sep 13 00:25:51.381911 systemd[1]: Started session-22.scope - Session 22 of User core. Sep 13 00:25:51.567237 sshd[5680]: Connection closed by 10.0.0.1 port 57724 Sep 13 00:25:51.567511 sshd-session[5678]: pam_unix(sshd:session): session closed for user core Sep 13 00:25:51.572992 systemd[1]: sshd@21-10.0.0.78:22-10.0.0.1:57724.service: Deactivated successfully. Sep 13 00:25:51.575726 systemd[1]: session-22.scope: Deactivated successfully. Sep 13 00:25:51.576674 systemd-logind[1582]: Session 22 logged out. Waiting for processes to exit. 
Sep 13 00:25:51.578670 systemd-logind[1582]: Removed session 22. Sep 13 00:25:56.584315 systemd[1]: Started sshd@22-10.0.0.78:22-10.0.0.1:57730.service - OpenSSH per-connection server daemon (10.0.0.1:57730). Sep 13 00:25:56.638520 sshd[5698]: Accepted publickey for core from 10.0.0.1 port 57730 ssh2: RSA SHA256:/8GM80NBEXcYojGDDtGcQvVXMgw0gDChkDh/EWkms34 Sep 13 00:25:56.640174 sshd-session[5698]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 00:25:56.645196 systemd-logind[1582]: New session 23 of user core. Sep 13 00:25:56.653953 systemd[1]: Started session-23.scope - Session 23 of User core. Sep 13 00:25:56.795004 sshd[5700]: Connection closed by 10.0.0.1 port 57730 Sep 13 00:25:56.795496 sshd-session[5698]: pam_unix(sshd:session): session closed for user core Sep 13 00:25:56.800319 systemd[1]: sshd@22-10.0.0.78:22-10.0.0.1:57730.service: Deactivated successfully. Sep 13 00:25:56.803131 systemd[1]: session-23.scope: Deactivated successfully. Sep 13 00:25:56.804094 systemd-logind[1582]: Session 23 logged out. Waiting for processes to exit. Sep 13 00:25:56.806273 systemd-logind[1582]: Removed session 23. Sep 13 00:26:01.808823 systemd[1]: Started sshd@23-10.0.0.78:22-10.0.0.1:47374.service - OpenSSH per-connection server daemon (10.0.0.1:47374). Sep 13 00:26:01.887243 sshd[5714]: Accepted publickey for core from 10.0.0.1 port 47374 ssh2: RSA SHA256:/8GM80NBEXcYojGDDtGcQvVXMgw0gDChkDh/EWkms34 Sep 13 00:26:01.889423 sshd-session[5714]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 00:26:01.896219 systemd-logind[1582]: New session 24 of user core. Sep 13 00:26:01.905090 systemd[1]: Started session-24.scope - Session 24 of User core. 
Sep 13 00:26:01.923246 containerd[1594]: time="2025-09-13T00:26:01.923167395Z" level=info msg="TaskExit event in podsandbox handler container_id:\"402657c57ad2d40d55213b44d0500c19c93d8b60c1182b2a1d2b4c3c4152f7ee\" id:\"8a4136087160e0ea7b9982d30452aaaba863315fdb4e45e121fe985a3396e245\" pid:5727 exited_at:{seconds:1757723161 nanos:922825696}" Sep 13 00:26:02.102498 sshd[5740]: Connection closed by 10.0.0.1 port 47374 Sep 13 00:26:02.102863 sshd-session[5714]: pam_unix(sshd:session): session closed for user core Sep 13 00:26:02.107050 systemd[1]: sshd@23-10.0.0.78:22-10.0.0.1:47374.service: Deactivated successfully. Sep 13 00:26:02.109152 systemd[1]: session-24.scope: Deactivated successfully. Sep 13 00:26:02.110209 systemd-logind[1582]: Session 24 logged out. Waiting for processes to exit. Sep 13 00:26:02.111483 systemd-logind[1582]: Removed session 24. Sep 13 00:26:07.117620 systemd[1]: Started sshd@24-10.0.0.78:22-10.0.0.1:47380.service - OpenSSH per-connection server daemon (10.0.0.1:47380). Sep 13 00:26:07.186820 sshd[5756]: Accepted publickey for core from 10.0.0.1 port 47380 ssh2: RSA SHA256:/8GM80NBEXcYojGDDtGcQvVXMgw0gDChkDh/EWkms34 Sep 13 00:26:07.188744 sshd-session[5756]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 00:26:07.198166 systemd-logind[1582]: New session 25 of user core. Sep 13 00:26:07.202067 systemd[1]: Started session-25.scope - Session 25 of User core. Sep 13 00:26:07.532116 sshd[5758]: Connection closed by 10.0.0.1 port 47380 Sep 13 00:26:07.532460 sshd-session[5756]: pam_unix(sshd:session): session closed for user core Sep 13 00:26:07.538069 systemd-logind[1582]: Session 25 logged out. Waiting for processes to exit. Sep 13 00:26:07.541569 systemd[1]: sshd@24-10.0.0.78:22-10.0.0.1:47380.service: Deactivated successfully. Sep 13 00:26:07.545514 systemd[1]: session-25.scope: Deactivated successfully. Sep 13 00:26:07.550780 systemd-logind[1582]: Removed session 25. 
Sep 13 00:26:08.819862 containerd[1594]: time="2025-09-13T00:26:08.819811188Z" level=info msg="TaskExit event in podsandbox handler container_id:\"ced5a06fdc3d04e048fcc88c6316396981ce8af64159be806439646d6faa744d\" id:\"d3c19f9e7c1bb671e8710e22d7f7521ea9772600443df6aefe8baf6297856b7a\" pid:5783 exited_at:{seconds:1757723168 nanos:819522862}"