Sep 9 00:31:13.513351 kernel: Linux version 6.6.104-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Mon Sep 8 22:41:17 -00 2025 Sep 9 00:31:13.513379 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=99a67175ee6aabbc03a22dabcade16d60ad192b31c4118a259bf1f24bbfa2d29 Sep 9 00:31:13.513392 kernel: BIOS-provided physical RAM map: Sep 9 00:31:13.513399 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable Sep 9 00:31:13.513463 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000007fffff] usable Sep 9 00:31:13.513471 kernel: BIOS-e820: [mem 0x0000000000800000-0x0000000000807fff] ACPI NVS Sep 9 00:31:13.513480 kernel: BIOS-e820: [mem 0x0000000000808000-0x000000000080afff] usable Sep 9 00:31:13.513488 kernel: BIOS-e820: [mem 0x000000000080b000-0x000000000080bfff] ACPI NVS Sep 9 00:31:13.513496 kernel: BIOS-e820: [mem 0x000000000080c000-0x000000000080ffff] usable Sep 9 00:31:13.513503 kernel: BIOS-e820: [mem 0x0000000000810000-0x00000000008fffff] ACPI NVS Sep 9 00:31:13.513515 kernel: BIOS-e820: [mem 0x0000000000900000-0x000000009c8eefff] usable Sep 9 00:31:13.513522 kernel: BIOS-e820: [mem 0x000000009c8ef000-0x000000009c9eefff] reserved Sep 9 00:31:13.513530 kernel: BIOS-e820: [mem 0x000000009c9ef000-0x000000009caeefff] type 20 Sep 9 00:31:13.513538 kernel: BIOS-e820: [mem 0x000000009caef000-0x000000009cb6efff] reserved Sep 9 00:31:13.513548 kernel: BIOS-e820: [mem 0x000000009cb6f000-0x000000009cb7efff] ACPI data Sep 9 00:31:13.513557 kernel: BIOS-e820: [mem 0x000000009cb7f000-0x000000009cbfefff] ACPI NVS Sep 9 00:31:13.513569 kernel: BIOS-e820: [mem 0x000000009cbff000-0x000000009cf3ffff] usable Sep 9 00:31:13.513577 kernel: BIOS-e820: [mem 0x000000009cf40000-0x000000009cf5ffff] reserved Sep 9 00:31:13.513586 kernel: BIOS-e820: [mem 0x000000009cf60000-0x000000009cffffff] ACPI NVS Sep 9 00:31:13.513594 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved Sep 9 00:31:13.513603 kernel: NX (Execute Disable) protection: active Sep 9 00:31:13.513611 kernel: APIC: Static calls initialized Sep 9 00:31:13.513620 kernel: efi: EFI v2.7 by EDK II Sep 9 00:31:13.513628 kernel: efi: SMBIOS=0x9c9ab000 ACPI=0x9cb7e000 ACPI 2.0=0x9cb7e014 MEMATTR=0x9b675198 Sep 9 00:31:13.513637 kernel: SMBIOS 2.8 present. 
Sep 9 00:31:13.513646 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 0.0.0 02/06/2015 Sep 9 00:31:13.513654 kernel: Hypervisor detected: KVM Sep 9 00:31:13.513665 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00 Sep 9 00:31:13.513674 kernel: kvm-clock: using sched offset of 8873570552 cycles Sep 9 00:31:13.513683 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns Sep 9 00:31:13.513692 kernel: tsc: Detected 2794.748 MHz processor Sep 9 00:31:13.513701 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Sep 9 00:31:13.513710 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Sep 9 00:31:13.513719 kernel: last_pfn = 0x9cf40 max_arch_pfn = 0x400000000 Sep 9 00:31:13.513728 kernel: MTRR map: 4 entries (2 fixed + 2 variable; max 18), built from 8 variable MTRRs Sep 9 00:31:13.513737 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Sep 9 00:31:13.513748 kernel: Using GB pages for direct mapping Sep 9 00:31:13.513757 kernel: Secure boot disabled Sep 9 00:31:13.513766 kernel: ACPI: Early table checksum verification disabled Sep 9 00:31:13.513775 kernel: ACPI: RSDP 0x000000009CB7E014 000024 (v02 BOCHS ) Sep 9 00:31:13.513788 kernel: ACPI: XSDT 0x000000009CB7D0E8 000054 (v01 BOCHS BXPC 00000001 01000013) Sep 9 00:31:13.513797 kernel: ACPI: FACP 0x000000009CB79000 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001) Sep 9 00:31:13.513806 kernel: ACPI: DSDT 0x000000009CB7A000 0021BA (v01 BOCHS BXPC 00000001 BXPC 00000001) Sep 9 00:31:13.513819 kernel: ACPI: FACS 0x000000009CBDD000 000040 Sep 9 00:31:13.513828 kernel: ACPI: APIC 0x000000009CB78000 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001) Sep 9 00:31:13.513837 kernel: ACPI: HPET 0x000000009CB77000 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001) Sep 9 00:31:13.513846 kernel: ACPI: MCFG 0x000000009CB76000 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001) Sep 9 00:31:13.513855 kernel: ACPI: WAET 0x000000009CB75000 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001) Sep 9 00:31:13.513864 kernel: ACPI: BGRT 0x000000009CB74000 000038 (v01 INTEL EDK2 00000002 01000013) Sep 9 00:31:13.513873 kernel: ACPI: Reserving FACP table memory at [mem 0x9cb79000-0x9cb790f3] Sep 9 00:31:13.513885 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cb7a000-0x9cb7c1b9] Sep 9 00:31:13.513894 kernel: ACPI: Reserving FACS table memory at [mem 0x9cbdd000-0x9cbdd03f] Sep 9 00:31:13.513902 kernel: ACPI: Reserving APIC table memory at [mem 0x9cb78000-0x9cb7808f] Sep 9 00:31:13.513911 kernel: ACPI: Reserving HPET table memory at [mem 0x9cb77000-0x9cb77037] Sep 9 00:31:13.513920 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cb76000-0x9cb7603b] Sep 9 00:31:13.513929 kernel: ACPI: Reserving WAET table memory at [mem 0x9cb75000-0x9cb75027] Sep 9 00:31:13.513939 kernel: ACPI: Reserving BGRT table memory at [mem 0x9cb74000-0x9cb74037] Sep 9 00:31:13.513955 kernel: No NUMA configuration found Sep 9 00:31:13.513966 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cf3ffff] Sep 9 00:31:13.513980 kernel: NODE_DATA(0) allocated [mem 0x9cea6000-0x9ceabfff] Sep 9 00:31:13.513989 kernel: Zone ranges: Sep 9 00:31:13.513998 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Sep 9 00:31:13.514007 kernel: DMA32 [mem 0x0000000001000000-0x000000009cf3ffff] Sep 9 00:31:13.514016 kernel: Normal empty Sep 9 00:31:13.514026 kernel: Movable zone start for each node Sep 9 00:31:13.514047 kernel: Early memory node ranges Sep 9 00:31:13.514067 kernel: node 0: [mem 
0x0000000000001000-0x000000000009ffff] Sep 9 00:31:13.514077 kernel: node 0: [mem 0x0000000000100000-0x00000000007fffff] Sep 9 00:31:13.514104 kernel: node 0: [mem 0x0000000000808000-0x000000000080afff] Sep 9 00:31:13.514131 kernel: node 0: [mem 0x000000000080c000-0x000000000080ffff] Sep 9 00:31:13.514141 kernel: node 0: [mem 0x0000000000900000-0x000000009c8eefff] Sep 9 00:31:13.514150 kernel: node 0: [mem 0x000000009cbff000-0x000000009cf3ffff] Sep 9 00:31:13.514159 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cf3ffff] Sep 9 00:31:13.514168 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Sep 9 00:31:13.514178 kernel: On node 0, zone DMA: 96 pages in unavailable ranges Sep 9 00:31:13.514187 kernel: On node 0, zone DMA: 8 pages in unavailable ranges Sep 9 00:31:13.514201 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Sep 9 00:31:13.514223 kernel: On node 0, zone DMA: 240 pages in unavailable ranges Sep 9 00:31:13.514237 kernel: On node 0, zone DMA32: 784 pages in unavailable ranges Sep 9 00:31:13.514249 kernel: On node 0, zone DMA32: 12480 pages in unavailable ranges Sep 9 00:31:13.514258 kernel: ACPI: PM-Timer IO Port: 0x608 Sep 9 00:31:13.514268 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1]) Sep 9 00:31:13.514277 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23 Sep 9 00:31:13.514286 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl) Sep 9 00:31:13.514295 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level) Sep 9 00:31:13.514305 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Sep 9 00:31:13.514314 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level) Sep 9 00:31:13.514326 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level) Sep 9 00:31:13.514335 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Sep 9 00:31:13.514344 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000 Sep 9 00:31:13.514354 kernel: TSC deadline timer available Sep 9 00:31:13.514363 kernel: smpboot: Allowing 4 CPUs, 0 hotplug CPUs Sep 9 00:31:13.514372 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write() Sep 9 00:31:13.514381 kernel: kvm-guest: KVM setup pv remote TLB flush Sep 9 00:31:13.514390 kernel: kvm-guest: setup PV sched yield Sep 9 00:31:13.514399 kernel: [mem 0xc0000000-0xffffffff] available for PCI devices Sep 9 00:31:13.514425 kernel: Booting paravirtualized kernel on KVM Sep 9 00:31:13.514434 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Sep 9 00:31:13.514444 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1 Sep 9 00:31:13.514453 kernel: percpu: Embedded 58 pages/cpu s197160 r8192 d32216 u524288 Sep 9 00:31:13.514462 kernel: pcpu-alloc: s197160 r8192 d32216 u524288 alloc=1*2097152 Sep 9 00:31:13.514471 kernel: pcpu-alloc: [0] 0 1 2 3 Sep 9 00:31:13.514480 kernel: kvm-guest: PV spinlocks enabled Sep 9 00:31:13.514489 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear) Sep 9 00:31:13.514499 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=99a67175ee6aabbc03a22dabcade16d60ad192b31c4118a259bf1f24bbfa2d29 Sep 9 00:31:13.514513 kernel: Unknown kernel command line 
parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Sep 9 00:31:13.514522 kernel: random: crng init done Sep 9 00:31:13.514531 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Sep 9 00:31:13.514540 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Sep 9 00:31:13.514549 kernel: Fallback order for Node 0: 0 Sep 9 00:31:13.514558 kernel: Built 1 zonelists, mobility grouping on. Total pages: 629759 Sep 9 00:31:13.514567 kernel: Policy zone: DMA32 Sep 9 00:31:13.514576 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Sep 9 00:31:13.514589 kernel: Memory: 2400596K/2567000K available (12288K kernel code, 2293K rwdata, 22744K rodata, 42880K init, 2316K bss, 166144K reserved, 0K cma-reserved) Sep 9 00:31:13.514598 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1 Sep 9 00:31:13.514607 kernel: ftrace: allocating 37969 entries in 149 pages Sep 9 00:31:13.514616 kernel: ftrace: allocated 149 pages with 4 groups Sep 9 00:31:13.514626 kernel: Dynamic Preempt: voluntary Sep 9 00:31:13.514644 kernel: rcu: Preemptible hierarchical RCU implementation. Sep 9 00:31:13.514657 kernel: rcu: RCU event tracing is enabled. Sep 9 00:31:13.514667 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4. Sep 9 00:31:13.514677 kernel: Trampoline variant of Tasks RCU enabled. Sep 9 00:31:13.514686 kernel: Rude variant of Tasks RCU enabled. Sep 9 00:31:13.514696 kernel: Tracing variant of Tasks RCU enabled. Sep 9 00:31:13.514705 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Sep 9 00:31:13.514717 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4 Sep 9 00:31:13.514727 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16 Sep 9 00:31:13.514737 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Sep 9 00:31:13.514747 kernel: Console: colour dummy device 80x25 Sep 9 00:31:13.514759 kernel: printk: console [ttyS0] enabled Sep 9 00:31:13.514769 kernel: ACPI: Core revision 20230628 Sep 9 00:31:13.514778 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns Sep 9 00:31:13.514788 kernel: APIC: Switch to symmetric I/O mode setup Sep 9 00:31:13.514797 kernel: x2apic enabled Sep 9 00:31:13.514807 kernel: APIC: Switched APIC routing to: physical x2apic Sep 9 00:31:13.514816 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask() Sep 9 00:31:13.514826 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself() Sep 9 00:31:13.514835 kernel: kvm-guest: setup PV IPIs Sep 9 00:31:13.514845 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1 Sep 9 00:31:13.514857 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized Sep 9 00:31:13.514866 kernel: Calibrating delay loop (skipped) preset value.. 
5589.49 BogoMIPS (lpj=2794748) Sep 9 00:31:13.514876 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated Sep 9 00:31:13.514885 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127 Sep 9 00:31:13.514895 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0 Sep 9 00:31:13.514904 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Sep 9 00:31:13.514914 kernel: Spectre V2 : Mitigation: Retpolines Sep 9 00:31:13.514923 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT Sep 9 00:31:13.514936 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls Sep 9 00:31:13.514945 kernel: active return thunk: retbleed_return_thunk Sep 9 00:31:13.514954 kernel: RETBleed: Mitigation: untrained return thunk Sep 9 00:31:13.514964 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier Sep 9 00:31:13.514974 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl Sep 9 00:31:13.514983 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied! Sep 9 00:31:13.514994 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options. Sep 9 00:31:13.515003 kernel: active return thunk: srso_return_thunk Sep 9 00:31:13.515013 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode Sep 9 00:31:13.515025 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Sep 9 00:31:13.515034 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Sep 9 00:31:13.515044 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Sep 9 00:31:13.515053 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Sep 9 00:31:13.515063 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format. Sep 9 00:31:13.515072 kernel: Freeing SMP alternatives memory: 32K Sep 9 00:31:13.515082 kernel: pid_max: default: 32768 minimum: 301 Sep 9 00:31:13.515091 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity Sep 9 00:31:13.515101 kernel: landlock: Up and running. Sep 9 00:31:13.515113 kernel: SELinux: Initializing. Sep 9 00:31:13.515137 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Sep 9 00:31:13.515147 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Sep 9 00:31:13.515156 kernel: smpboot: CPU0: AMD EPYC 7402P 24-Core Processor (family: 0x17, model: 0x31, stepping: 0x0) Sep 9 00:31:13.515166 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Sep 9 00:31:13.515176 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Sep 9 00:31:13.515185 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Sep 9 00:31:13.515195 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver. Sep 9 00:31:13.515207 kernel: ... version: 0 Sep 9 00:31:13.515217 kernel: ... bit width: 48 Sep 9 00:31:13.515226 kernel: ... generic registers: 6 Sep 9 00:31:13.515236 kernel: ... value mask: 0000ffffffffffff Sep 9 00:31:13.515245 kernel: ... max period: 00007fffffffffff Sep 9 00:31:13.515254 kernel: ... fixed-purpose events: 0 Sep 9 00:31:13.515264 kernel: ... 
event mask: 000000000000003f Sep 9 00:31:13.515273 kernel: signal: max sigframe size: 1776 Sep 9 00:31:13.515283 kernel: rcu: Hierarchical SRCU implementation. Sep 9 00:31:13.515293 kernel: rcu: Max phase no-delay instances is 400. Sep 9 00:31:13.515305 kernel: smp: Bringing up secondary CPUs ... Sep 9 00:31:13.515314 kernel: smpboot: x86: Booting SMP configuration: Sep 9 00:31:13.515324 kernel: .... node #0, CPUs: #1 #2 #3 Sep 9 00:31:13.515333 kernel: smp: Brought up 1 node, 4 CPUs Sep 9 00:31:13.515343 kernel: smpboot: Max logical packages: 1 Sep 9 00:31:13.515352 kernel: smpboot: Total of 4 processors activated (22357.98 BogoMIPS) Sep 9 00:31:13.515361 kernel: devtmpfs: initialized Sep 9 00:31:13.515371 kernel: x86/mm: Memory block size: 128MB Sep 9 00:31:13.515380 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00800000-0x00807fff] (32768 bytes) Sep 9 00:31:13.515398 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x0080b000-0x0080bfff] (4096 bytes) Sep 9 00:31:13.515430 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00810000-0x008fffff] (983040 bytes) Sep 9 00:31:13.515443 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cb7f000-0x9cbfefff] (524288 bytes) Sep 9 00:31:13.515452 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cf60000-0x9cffffff] (655360 bytes) Sep 9 00:31:13.515463 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Sep 9 00:31:13.515473 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear) Sep 9 00:31:13.515483 kernel: pinctrl core: initialized pinctrl subsystem Sep 9 00:31:13.515493 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Sep 9 00:31:13.515508 kernel: audit: initializing netlink subsys (disabled) Sep 9 00:31:13.515527 kernel: audit: type=2000 audit(1757377870.857:1): state=initialized audit_enabled=0 res=1 Sep 9 00:31:13.515538 kernel: thermal_sys: Registered thermal governor 'step_wise' Sep 9 00:31:13.515548 kernel: thermal_sys: Registered thermal governor 'user_space' Sep 9 00:31:13.515559 kernel: cpuidle: using governor menu Sep 9 00:31:13.515570 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Sep 9 00:31:13.515580 kernel: dca service started, version 1.12.1 Sep 9 00:31:13.515592 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xb0000000-0xbfffffff] (base 0xb0000000) Sep 9 00:31:13.515603 kernel: PCI: MMCONFIG at [mem 0xb0000000-0xbfffffff] reserved as E820 entry Sep 9 00:31:13.515617 kernel: PCI: Using configuration type 1 for base access Sep 9 00:31:13.515627 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. 
Sep 9 00:31:13.515638 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Sep 9 00:31:13.515649 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page Sep 9 00:31:13.515660 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Sep 9 00:31:13.515670 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page Sep 9 00:31:13.515681 kernel: ACPI: Added _OSI(Module Device) Sep 9 00:31:13.515692 kernel: ACPI: Added _OSI(Processor Device) Sep 9 00:31:13.515703 kernel: ACPI: Added _OSI(Processor Aggregator Device) Sep 9 00:31:13.515719 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Sep 9 00:31:13.515732 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC Sep 9 00:31:13.515748 kernel: ACPI: Interpreter enabled Sep 9 00:31:13.515759 kernel: ACPI: PM: (supports S0 S3 S5) Sep 9 00:31:13.515768 kernel: ACPI: Using IOAPIC for interrupt routing Sep 9 00:31:13.515778 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Sep 9 00:31:13.515787 kernel: PCI: Using E820 reservations for host bridge windows Sep 9 00:31:13.515797 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F Sep 9 00:31:13.515806 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Sep 9 00:31:13.516075 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Sep 9 00:31:13.516260 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR] Sep 9 00:31:13.516473 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability] Sep 9 00:31:13.516493 kernel: PCI host bridge to bus 0000:00 Sep 9 00:31:13.516736 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] Sep 9 00:31:13.516938 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] Sep 9 00:31:13.517112 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] Sep 9 00:31:13.517303 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xafffffff window] Sep 9 00:31:13.517475 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window] Sep 9 00:31:13.517608 kernel: pci_bus 0000:00: root bus resource [mem 0x800000000-0xfffffffff window] Sep 9 00:31:13.517739 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Sep 9 00:31:13.517938 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000 Sep 9 00:31:13.518102 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000 Sep 9 00:31:13.518265 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xc0000000-0xc0ffffff pref] Sep 9 00:31:13.518439 kernel: pci 0000:00:01.0: reg 0x18: [mem 0xc1044000-0xc1044fff] Sep 9 00:31:13.518592 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xffff0000-0xffffffff pref] Sep 9 00:31:13.518734 kernel: pci 0000:00:01.0: BAR 0: assigned to efifb Sep 9 00:31:13.518877 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] Sep 9 00:31:13.519043 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00 Sep 9 00:31:13.519232 kernel: pci 0000:00:02.0: reg 0x10: [io 0x6100-0x611f] Sep 9 00:31:13.519444 kernel: pci 0000:00:02.0: reg 0x14: [mem 0xc1043000-0xc1043fff] Sep 9 00:31:13.519603 kernel: pci 0000:00:02.0: reg 0x20: [mem 0x800000000-0x800003fff 64bit pref] Sep 9 00:31:13.519768 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000 Sep 9 00:31:13.519913 kernel: pci 0000:00:03.0: reg 0x10: [io 0x6000-0x607f] Sep 9 00:31:13.520055 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xc1042000-0xc1042fff] Sep 9 00:31:13.520236 kernel: pci 
0000:00:03.0: reg 0x20: [mem 0x800004000-0x800007fff 64bit pref] Sep 9 00:31:13.520458 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000 Sep 9 00:31:13.520617 kernel: pci 0000:00:04.0: reg 0x10: [io 0x60e0-0x60ff] Sep 9 00:31:13.520768 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xc1041000-0xc1041fff] Sep 9 00:31:13.520949 kernel: pci 0000:00:04.0: reg 0x20: [mem 0x800008000-0x80000bfff 64bit pref] Sep 9 00:31:13.521151 kernel: pci 0000:00:04.0: reg 0x30: [mem 0xfffc0000-0xffffffff pref] Sep 9 00:31:13.521374 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100 Sep 9 00:31:13.521730 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO Sep 9 00:31:13.521927 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601 Sep 9 00:31:13.522085 kernel: pci 0000:00:1f.2: reg 0x20: [io 0x60c0-0x60df] Sep 9 00:31:13.522242 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xc1040000-0xc1040fff] Sep 9 00:31:13.522426 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500 Sep 9 00:31:13.522573 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x6080-0x60bf] Sep 9 00:31:13.522586 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10 Sep 9 00:31:13.522597 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10 Sep 9 00:31:13.522607 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 Sep 9 00:31:13.522622 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11 Sep 9 00:31:13.522631 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10 Sep 9 00:31:13.522641 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10 Sep 9 00:31:13.522651 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11 Sep 9 00:31:13.522660 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11 Sep 9 00:31:13.522670 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16 Sep 9 00:31:13.522680 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17 Sep 9 00:31:13.522690 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18 Sep 9 00:31:13.522699 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19 Sep 9 00:31:13.522712 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20 Sep 9 00:31:13.522722 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21 Sep 9 00:31:13.522732 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22 Sep 9 00:31:13.522742 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23 Sep 9 00:31:13.522752 kernel: iommu: Default domain type: Translated Sep 9 00:31:13.522762 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Sep 9 00:31:13.522771 kernel: efivars: Registered efivars operations Sep 9 00:31:13.522781 kernel: PCI: Using ACPI for IRQ routing Sep 9 00:31:13.522790 kernel: PCI: pci_cache_line_size set to 64 bytes Sep 9 00:31:13.522803 kernel: e820: reserve RAM buffer [mem 0x0080b000-0x008fffff] Sep 9 00:31:13.522813 kernel: e820: reserve RAM buffer [mem 0x00810000-0x008fffff] Sep 9 00:31:13.522823 kernel: e820: reserve RAM buffer [mem 0x9c8ef000-0x9fffffff] Sep 9 00:31:13.522840 kernel: e820: reserve RAM buffer [mem 0x9cf40000-0x9fffffff] Sep 9 00:31:13.523005 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device Sep 9 00:31:13.523165 kernel: pci 0000:00:01.0: vgaarb: bridge control possible Sep 9 00:31:13.523310 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none Sep 9 00:31:13.523323 kernel: vgaarb: loaded Sep 9 00:31:13.523338 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0 Sep 9 00:31:13.523348 kernel: hpet0: 3 comparators, 
64-bit 100.000000 MHz counter Sep 9 00:31:13.523358 kernel: clocksource: Switched to clocksource kvm-clock Sep 9 00:31:13.523368 kernel: VFS: Disk quotas dquot_6.6.0 Sep 9 00:31:13.523379 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Sep 9 00:31:13.523389 kernel: pnp: PnP ACPI init Sep 9 00:31:13.523579 kernel: system 00:05: [mem 0xb0000000-0xbfffffff window] has been reserved Sep 9 00:31:13.523594 kernel: pnp: PnP ACPI: found 6 devices Sep 9 00:31:13.523605 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Sep 9 00:31:13.523620 kernel: NET: Registered PF_INET protocol family Sep 9 00:31:13.523630 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) Sep 9 00:31:13.523640 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) Sep 9 00:31:13.523650 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Sep 9 00:31:13.523660 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) Sep 9 00:31:13.523670 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear) Sep 9 00:31:13.523680 kernel: TCP: Hash tables configured (established 32768 bind 32768) Sep 9 00:31:13.523690 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) Sep 9 00:31:13.523703 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) Sep 9 00:31:13.523713 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Sep 9 00:31:13.523722 kernel: NET: Registered PF_XDP protocol family Sep 9 00:31:13.523869 kernel: pci 0000:00:04.0: can't claim BAR 6 [mem 0xfffc0000-0xffffffff pref]: no compatible bridge window Sep 9 00:31:13.524011 kernel: pci 0000:00:04.0: BAR 6: assigned [mem 0x9d000000-0x9d03ffff pref] Sep 9 00:31:13.524177 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] Sep 9 00:31:13.524347 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] Sep 9 00:31:13.524523 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] Sep 9 00:31:13.524677 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xafffffff window] Sep 9 00:31:13.524833 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window] Sep 9 00:31:13.524984 kernel: pci_bus 0000:00: resource 9 [mem 0x800000000-0xfffffffff window] Sep 9 00:31:13.525000 kernel: PCI: CLS 0 bytes, default 64 Sep 9 00:31:13.525012 kernel: Initialise system trusted keyrings Sep 9 00:31:13.525024 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 Sep 9 00:31:13.525035 kernel: Key type asymmetric registered Sep 9 00:31:13.525045 kernel: Asymmetric key parser 'x509' registered Sep 9 00:31:13.525056 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251) Sep 9 00:31:13.525072 kernel: io scheduler mq-deadline registered Sep 9 00:31:13.525083 kernel: io scheduler kyber registered Sep 9 00:31:13.525094 kernel: io scheduler bfq registered Sep 9 00:31:13.525105 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Sep 9 00:31:13.525116 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22 Sep 9 00:31:13.525138 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23 Sep 9 00:31:13.525148 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20 Sep 9 00:31:13.525158 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Sep 9 00:31:13.525168 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Sep 9 00:31:13.525181 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 
1,12 Sep 9 00:31:13.525191 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Sep 9 00:31:13.525201 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Sep 9 00:31:13.525365 kernel: rtc_cmos 00:04: RTC can wake from S4 Sep 9 00:31:13.525519 kernel: rtc_cmos 00:04: registered as rtc0 Sep 9 00:31:13.525733 kernel: rtc_cmos 00:04: setting system clock to 2025-09-09T00:31:12 UTC (1757377872) Sep 9 00:31:13.525899 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram, hpet irqs Sep 9 00:31:13.525913 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled Sep 9 00:31:13.525930 kernel: efifb: probing for efifb Sep 9 00:31:13.525941 kernel: efifb: framebuffer at 0xc0000000, using 1408k, total 1408k Sep 9 00:31:13.525951 kernel: efifb: mode is 800x600x24, linelength=2400, pages=1 Sep 9 00:31:13.525962 kernel: efifb: scrolling: redraw Sep 9 00:31:13.525972 kernel: efifb: Truecolor: size=0:8:8:8, shift=0:16:8:0 Sep 9 00:31:13.525983 kernel: Console: switching to colour frame buffer device 100x37 Sep 9 00:31:13.525993 kernel: fb0: EFI VGA frame buffer device Sep 9 00:31:13.526024 kernel: pstore: Using crash dump compression: deflate Sep 9 00:31:13.526037 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Sep 9 00:31:13.526051 kernel: pstore: Registered efi_pstore as persistent store backend Sep 9 00:31:13.526062 kernel: NET: Registered PF_INET6 protocol family Sep 9 00:31:13.526072 kernel: Segment Routing with IPv6 Sep 9 00:31:13.526082 kernel: In-situ OAM (IOAM) with IPv6 Sep 9 00:31:13.526093 kernel: NET: Registered PF_PACKET protocol family Sep 9 00:31:13.526103 kernel: Key type dns_resolver registered Sep 9 00:31:13.526114 kernel: IPI shorthand broadcast: enabled Sep 9 00:31:13.526138 kernel: sched_clock: Marking stable (2003004562, 163700091)->(2345145352, -178440699) Sep 9 00:31:13.526149 kernel: registered taskstats version 1 Sep 9 00:31:13.526163 kernel: Loading compiled-in X.509 certificates Sep 9 00:31:13.526175 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.104-flatcar: cc5240ef94b546331b2896cdc739274c03278c51' Sep 9 00:31:13.526185 kernel: Key type .fscrypt registered Sep 9 00:31:13.526196 kernel: Key type fscrypt-provisioning registered Sep 9 00:31:13.526207 kernel: ima: No TPM chip found, activating TPM-bypass! Sep 9 00:31:13.526217 kernel: ima: Allocated hash algorithm: sha1 Sep 9 00:31:13.526228 kernel: ima: No architecture policies found Sep 9 00:31:13.526239 kernel: clk: Disabling unused clocks Sep 9 00:31:13.526249 kernel: Freeing unused kernel image (initmem) memory: 42880K Sep 9 00:31:13.526264 kernel: Write protecting the kernel read-only data: 36864k Sep 9 00:31:13.526275 kernel: Freeing unused kernel image (rodata/data gap) memory: 1832K Sep 9 00:31:13.526286 kernel: Run /init as init process Sep 9 00:31:13.526299 kernel: with arguments: Sep 9 00:31:13.526311 kernel: /init Sep 9 00:31:13.526321 kernel: with environment: Sep 9 00:31:13.526332 kernel: HOME=/ Sep 9 00:31:13.526343 kernel: TERM=linux Sep 9 00:31:13.526354 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Sep 9 00:31:13.526371 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Sep 9 00:31:13.526386 systemd[1]: Detected virtualization kvm. 
Sep 9 00:31:13.526397 systemd[1]: Detected architecture x86-64. Sep 9 00:31:13.526436 systemd[1]: Running in initrd. Sep 9 00:31:13.526456 systemd[1]: No hostname configured, using default hostname. Sep 9 00:31:13.526468 systemd[1]: Hostname set to . Sep 9 00:31:13.526480 systemd[1]: Initializing machine ID from VM UUID. Sep 9 00:31:13.526491 systemd[1]: Queued start job for default target initrd.target. Sep 9 00:31:13.526502 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Sep 9 00:31:13.526513 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Sep 9 00:31:13.526525 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Sep 9 00:31:13.526536 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Sep 9 00:31:13.526550 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Sep 9 00:31:13.526562 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Sep 9 00:31:13.526575 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Sep 9 00:31:13.526586 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Sep 9 00:31:13.526597 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Sep 9 00:31:13.526608 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Sep 9 00:31:13.526622 systemd[1]: Reached target paths.target - Path Units. Sep 9 00:31:13.526632 systemd[1]: Reached target slices.target - Slice Units. Sep 9 00:31:13.526643 systemd[1]: Reached target swap.target - Swaps. Sep 9 00:31:13.526654 systemd[1]: Reached target timers.target - Timer Units. Sep 9 00:31:13.526665 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Sep 9 00:31:13.526676 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Sep 9 00:31:13.526687 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Sep 9 00:31:13.526698 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Sep 9 00:31:13.526709 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Sep 9 00:31:13.526722 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Sep 9 00:31:13.526733 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Sep 9 00:31:13.526744 systemd[1]: Reached target sockets.target - Socket Units. Sep 9 00:31:13.526756 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Sep 9 00:31:13.526767 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Sep 9 00:31:13.526777 systemd[1]: Finished network-cleanup.service - Network Cleanup. Sep 9 00:31:13.526788 systemd[1]: Starting systemd-fsck-usr.service... Sep 9 00:31:13.526799 systemd[1]: Starting systemd-journald.service - Journal Service... Sep 9 00:31:13.526810 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Sep 9 00:31:13.526824 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Sep 9 00:31:13.526835 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Sep 9 00:31:13.526846 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. 
Sep 9 00:31:13.526889 systemd-journald[194]: Collecting audit messages is disabled. Sep 9 00:31:13.526918 systemd-journald[194]: Journal started Sep 9 00:31:13.526944 systemd-journald[194]: Runtime Journal (/run/log/journal/aa19e5253e614273a78b09d46575e853) is 6.0M, max 48.3M, 42.2M free. Sep 9 00:31:13.529466 systemd[1]: Started systemd-journald.service - Journal Service. Sep 9 00:31:13.538203 systemd[1]: Finished systemd-fsck-usr.service. Sep 9 00:31:13.551334 systemd-modules-load[195]: Inserted module 'overlay' Sep 9 00:31:13.554269 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Sep 9 00:31:13.565898 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Sep 9 00:31:13.568263 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Sep 9 00:31:13.570784 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Sep 9 00:31:13.578754 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Sep 9 00:31:13.587898 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Sep 9 00:31:13.609761 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Sep 9 00:31:13.622451 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Sep 9 00:31:13.639854 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Sep 9 00:31:13.648948 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Sep 9 00:31:13.661687 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Sep 9 00:31:13.673340 kernel: Bridge firewalling registered Sep 9 00:31:13.673354 systemd-modules-load[195]: Inserted module 'br_netfilter' Sep 9 00:31:13.675859 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Sep 9 00:31:13.684647 dracut-cmdline[224]: dracut-dracut-053 Sep 9 00:31:13.685463 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Sep 9 00:31:13.692845 dracut-cmdline[224]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=99a67175ee6aabbc03a22dabcade16d60ad192b31c4118a259bf1f24bbfa2d29 Sep 9 00:31:13.701738 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Sep 9 00:31:13.723710 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Sep 9 00:31:13.784763 systemd-resolved[249]: Positive Trust Anchors: Sep 9 00:31:13.784788 systemd-resolved[249]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Sep 9 00:31:13.784829 systemd-resolved[249]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Sep 9 00:31:13.788196 systemd-resolved[249]: Defaulting to hostname 'linux'. Sep 9 00:31:13.795276 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Sep 9 00:31:13.835781 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Sep 9 00:31:13.891462 kernel: SCSI subsystem initialized Sep 9 00:31:13.907002 kernel: Loading iSCSI transport class v2.0-870. Sep 9 00:31:13.940463 kernel: iscsi: registered transport (tcp) Sep 9 00:31:13.984580 kernel: iscsi: registered transport (qla4xxx) Sep 9 00:31:13.984684 kernel: QLogic iSCSI HBA Driver Sep 9 00:31:14.103462 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Sep 9 00:31:14.120762 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Sep 9 00:31:14.176324 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Sep 9 00:31:14.176445 kernel: device-mapper: uevent: version 1.0.3 Sep 9 00:31:14.176467 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Sep 9 00:31:14.275708 kernel: raid6: avx2x4 gen() 20477 MB/s Sep 9 00:31:14.295147 kernel: raid6: avx2x2 gen() 19920 MB/s Sep 9 00:31:14.311038 kernel: raid6: avx2x1 gen() 16598 MB/s Sep 9 00:31:14.311144 kernel: raid6: using algorithm avx2x4 gen() 20477 MB/s Sep 9 00:31:14.337480 kernel: raid6: .... xor() 5365 MB/s, rmw enabled Sep 9 00:31:14.337576 kernel: raid6: using avx2x2 recovery algorithm Sep 9 00:31:14.372345 kernel: xor: automatically using best checksumming function avx Sep 9 00:31:14.727271 kernel: Btrfs loaded, zoned=no, fsverity=no Sep 9 00:31:14.784932 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Sep 9 00:31:14.797711 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Sep 9 00:31:14.831013 systemd-udevd[416]: Using default interface naming scheme 'v255'. Sep 9 00:31:14.840866 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Sep 9 00:31:14.860862 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Sep 9 00:31:14.894497 dracut-pre-trigger[419]: rd.md=0: removing MD RAID activation Sep 9 00:31:15.001167 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Sep 9 00:31:15.015945 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Sep 9 00:31:15.144654 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Sep 9 00:31:15.172300 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Sep 9 00:31:15.218388 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Sep 9 00:31:15.228774 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. 
Sep 9 00:31:15.245816 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Sep 9 00:31:15.249589 systemd[1]: Reached target remote-fs.target - Remote File Systems. Sep 9 00:31:15.325188 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Sep 9 00:31:15.338332 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Sep 9 00:31:15.338626 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Sep 9 00:31:15.346055 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Sep 9 00:31:15.347650 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Sep 9 00:31:15.348552 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Sep 9 00:31:15.353698 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Sep 9 00:31:15.361469 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Sep 9 00:31:15.366301 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Sep 9 00:31:15.378459 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues Sep 9 00:31:15.378755 kernel: cryptd: max_cpu_qlen set to 1000 Sep 9 00:31:15.385443 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB) Sep 9 00:31:15.385910 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Sep 9 00:31:15.390978 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Sep 9 00:31:15.408328 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Sep 9 00:31:15.408438 kernel: GPT:9289727 != 19775487 Sep 9 00:31:15.408459 kernel: GPT:Alternate GPT header not at the end of the disk. Sep 9 00:31:15.408475 kernel: GPT:9289727 != 19775487 Sep 9 00:31:15.408507 kernel: GPT: Use GNU Parted to correct GPT errors. Sep 9 00:31:15.408522 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Sep 9 00:31:15.404973 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Sep 9 00:31:15.440437 kernel: libata version 3.00 loaded. Sep 9 00:31:15.449588 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Sep 9 00:31:15.544086 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Sep 9 00:31:15.602439 kernel: AVX2 version of gcm_enc/dec engaged. Sep 9 00:31:15.602508 kernel: BTRFS: device fsid 7cd16ef1-c91b-4e35-a9b3-a431b3c1949a devid 1 transid 36 /dev/vda3 scanned by (udev-worker) (478) Sep 9 00:31:15.608378 kernel: BTRFS: device label OEM devid 1 transid 9 /dev/vda6 scanned by (udev-worker) (474) Sep 9 00:31:15.602778 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. Sep 9 00:31:15.624117 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Sep 9 00:31:15.676461 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. Sep 9 00:31:15.692823 kernel: AES CTR mode by8 optimization enabled Sep 9 00:31:15.702580 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132. Sep 9 00:31:15.771937 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. Sep 9 00:31:15.790231 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. 
Sep 9 00:31:15.818311 kernel: ahci 0000:00:1f.2: version 3.0 Sep 9 00:31:15.818727 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16 Sep 9 00:31:15.818747 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode Sep 9 00:31:15.822434 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Sep 9 00:31:15.832199 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only Sep 9 00:31:15.832503 kernel: scsi host0: ahci Sep 9 00:31:15.844147 kernel: scsi host1: ahci Sep 9 00:31:15.864752 kernel: scsi host2: ahci Sep 9 00:31:15.865900 kernel: scsi host3: ahci Sep 9 00:31:15.866082 kernel: scsi host4: ahci Sep 9 00:31:15.875654 kernel: scsi host5: ahci Sep 9 00:31:15.876051 kernel: ata1: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040100 irq 34 Sep 9 00:31:15.876081 kernel: ata2: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040180 irq 34 Sep 9 00:31:15.876096 kernel: ata3: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040200 irq 34 Sep 9 00:31:15.876128 kernel: ata4: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040280 irq 34 Sep 9 00:31:15.876143 kernel: ata5: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040300 irq 34 Sep 9 00:31:15.876156 kernel: ata6: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040380 irq 34 Sep 9 00:31:15.997314 disk-uuid[550]: Primary Header is updated. Sep 9 00:31:15.997314 disk-uuid[550]: Secondary Entries is updated. Sep 9 00:31:15.997314 disk-uuid[550]: Secondary Header is updated. Sep 9 00:31:16.024145 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Sep 9 00:31:16.033167 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Sep 9 00:31:16.190868 kernel: ata2: SATA link down (SStatus 0 SControl 300) Sep 9 00:31:16.190967 kernel: ata4: SATA link down (SStatus 0 SControl 300) Sep 9 00:31:16.192323 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300) Sep 9 00:31:16.192369 kernel: ata1: SATA link down (SStatus 0 SControl 300) Sep 9 00:31:16.196224 kernel: ata5: SATA link down (SStatus 0 SControl 300) Sep 9 00:31:16.196320 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100 Sep 9 00:31:16.196335 kernel: ata3.00: applying bridge limits Sep 9 00:31:16.199313 kernel: ata3.00: configured for UDMA/100 Sep 9 00:31:16.200699 kernel: ata6: SATA link down (SStatus 0 SControl 300) Sep 9 00:31:16.205105 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5 Sep 9 00:31:16.301700 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray Sep 9 00:31:16.302148 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Sep 9 00:31:16.317119 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0 Sep 9 00:31:17.047477 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Sep 9 00:31:17.051118 disk-uuid[582]: The operation has completed successfully. Sep 9 00:31:17.151868 systemd[1]: disk-uuid.service: Deactivated successfully. Sep 9 00:31:17.153647 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Sep 9 00:31:17.178833 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Sep 9 00:31:17.195855 sh[599]: Success Sep 9 00:31:17.250865 kernel: device-mapper: verity: sha256 using implementation "sha256-ni" Sep 9 00:31:17.341816 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Sep 9 00:31:17.352758 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Sep 9 00:31:17.363790 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. 
Sep 9 00:31:17.390133 kernel: BTRFS info (device dm-0): first mount of filesystem 7cd16ef1-c91b-4e35-a9b3-a431b3c1949a Sep 9 00:31:17.390208 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Sep 9 00:31:17.390224 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Sep 9 00:31:17.390603 kernel: BTRFS info (device dm-0): disabling log replay at mount time Sep 9 00:31:17.393844 kernel: BTRFS info (device dm-0): using free space tree Sep 9 00:31:17.422646 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Sep 9 00:31:17.424265 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Sep 9 00:31:17.438828 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Sep 9 00:31:17.442146 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Sep 9 00:31:17.472597 kernel: BTRFS info (device vda6): first mount of filesystem a5263def-4663-4ce6-b873-45a7d7f1ec33 Sep 9 00:31:17.472668 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Sep 9 00:31:17.472684 kernel: BTRFS info (device vda6): using free space tree Sep 9 00:31:17.478074 kernel: BTRFS info (device vda6): auto enabling async discard Sep 9 00:31:17.503755 systemd[1]: mnt-oem.mount: Deactivated successfully. Sep 9 00:31:17.508581 kernel: BTRFS info (device vda6): last unmount of filesystem a5263def-4663-4ce6-b873-45a7d7f1ec33 Sep 9 00:31:17.544891 systemd[1]: Finished ignition-setup.service - Ignition (setup). Sep 9 00:31:17.582206 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Sep 9 00:31:17.758540 ignition[695]: Ignition 2.19.0 Sep 9 00:31:17.758553 ignition[695]: Stage: fetch-offline Sep 9 00:31:17.758604 ignition[695]: no configs at "/usr/lib/ignition/base.d" Sep 9 00:31:17.758616 ignition[695]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Sep 9 00:31:17.758722 ignition[695]: parsed url from cmdline: "" Sep 9 00:31:17.758726 ignition[695]: no config URL provided Sep 9 00:31:17.758732 ignition[695]: reading system config file "/usr/lib/ignition/user.ign" Sep 9 00:31:17.758742 ignition[695]: no config at "/usr/lib/ignition/user.ign" Sep 9 00:31:17.758774 ignition[695]: op(1): [started] loading QEMU firmware config module Sep 9 00:31:17.758780 ignition[695]: op(1): executing: "modprobe" "qemu_fw_cfg" Sep 9 00:31:17.801039 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Sep 9 00:31:17.824241 systemd[1]: Starting systemd-networkd.service - Network Configuration... Sep 9 00:31:17.805506 ignition[695]: op(1): [finished] loading QEMU firmware config module Sep 9 00:31:17.866472 ignition[695]: parsing config with SHA512: bfc499631bc45049c785c03c0c7f350af01136549a91d94421adf77a13bd330460dfd90953df3db5659c6affab056bcff353513a59071f828d688d294c789a24 Sep 9 00:31:17.878326 systemd-networkd[786]: lo: Link UP Sep 9 00:31:17.880943 ignition[695]: fetch-offline: fetch-offline passed Sep 9 00:31:17.878340 systemd-networkd[786]: lo: Gained carrier Sep 9 00:31:17.887065 ignition[695]: Ignition finished successfully Sep 9 00:31:17.880227 unknown[695]: fetched base config from "system" Sep 9 00:31:17.880238 unknown[695]: fetched user config from "qemu" Sep 9 00:31:17.880855 systemd-networkd[786]: Enumeration completed Sep 9 00:31:17.881512 systemd[1]: Started systemd-networkd.service - Network Configuration. 
Sep 9 00:31:17.882252 systemd-networkd[786]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 9 00:31:17.882257 systemd-networkd[786]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Sep 9 00:31:17.887325 systemd-networkd[786]: eth0: Link UP Sep 9 00:31:17.887331 systemd-networkd[786]: eth0: Gained carrier Sep 9 00:31:17.887345 systemd-networkd[786]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 9 00:31:17.887469 systemd[1]: Reached target network.target - Network. Sep 9 00:31:17.896757 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Sep 9 00:31:17.944775 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Sep 9 00:31:17.965661 systemd-networkd[786]: eth0: DHCPv4 address 10.0.0.108/16, gateway 10.0.0.1 acquired from 10.0.0.1 Sep 9 00:31:17.965793 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Sep 9 00:31:18.010612 ignition[790]: Ignition 2.19.0 Sep 9 00:31:18.010637 ignition[790]: Stage: kargs Sep 9 00:31:18.010863 ignition[790]: no configs at "/usr/lib/ignition/base.d" Sep 9 00:31:18.010879 ignition[790]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Sep 9 00:31:18.012208 ignition[790]: kargs: kargs passed Sep 9 00:31:18.012276 ignition[790]: Ignition finished successfully Sep 9 00:31:18.019820 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Sep 9 00:31:18.033125 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Sep 9 00:31:18.276754 ignition[799]: Ignition 2.19.0 Sep 9 00:31:18.276846 ignition[799]: Stage: disks Sep 9 00:31:18.350271 ignition[799]: no configs at "/usr/lib/ignition/base.d" Sep 9 00:31:18.350304 ignition[799]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Sep 9 00:31:18.354172 ignition[799]: disks: disks passed Sep 9 00:31:18.355090 ignition[799]: Ignition finished successfully Sep 9 00:31:18.359824 systemd[1]: Finished ignition-disks.service - Ignition (disks). Sep 9 00:31:18.362858 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Sep 9 00:31:18.365767 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Sep 9 00:31:18.369142 systemd[1]: Reached target local-fs.target - Local File Systems. Sep 9 00:31:18.373259 systemd[1]: Reached target sysinit.target - System Initialization. Sep 9 00:31:18.375875 systemd[1]: Reached target basic.target - Basic System. Sep 9 00:31:18.389650 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Sep 9 00:31:18.424623 systemd-fsck[810]: ROOT: clean, 14/553520 files, 52654/553472 blocks Sep 9 00:31:18.475750 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Sep 9 00:31:18.491366 systemd[1]: Mounting sysroot.mount - /sysroot... Sep 9 00:31:18.737721 kernel: EXT4-fs (vda9): mounted filesystem ee55a213-d578-493d-a79b-e10c399cd35c r/w with ordered data mode. Quota mode: none. Sep 9 00:31:18.749074 systemd[1]: Mounted sysroot.mount - /sysroot. Sep 9 00:31:18.754541 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Sep 9 00:31:18.779634 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Sep 9 00:31:18.804134 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... 
Sep 9 00:31:18.820263 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/vda6 scanned by mount (818) Sep 9 00:31:18.820316 kernel: BTRFS info (device vda6): first mount of filesystem a5263def-4663-4ce6-b873-45a7d7f1ec33 Sep 9 00:31:18.820335 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Sep 9 00:31:18.820351 kernel: BTRFS info (device vda6): using free space tree Sep 9 00:31:18.818596 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Sep 9 00:31:18.818688 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Sep 9 00:31:18.818739 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Sep 9 00:31:18.842775 kernel: BTRFS info (device vda6): auto enabling async discard Sep 9 00:31:18.851776 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Sep 9 00:31:18.862870 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Sep 9 00:31:18.876639 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Sep 9 00:31:18.963760 initrd-setup-root[842]: cut: /sysroot/etc/passwd: No such file or directory Sep 9 00:31:18.975034 initrd-setup-root[849]: cut: /sysroot/etc/group: No such file or directory Sep 9 00:31:18.986465 initrd-setup-root[856]: cut: /sysroot/etc/shadow: No such file or directory Sep 9 00:31:19.005498 initrd-setup-root[863]: cut: /sysroot/etc/gshadow: No such file or directory Sep 9 00:31:19.025736 systemd-networkd[786]: eth0: Gained IPv6LL Sep 9 00:31:19.368482 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Sep 9 00:31:19.395137 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Sep 9 00:31:19.410369 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Sep 9 00:31:19.423405 systemd[1]: sysroot-oem.mount: Deactivated successfully. Sep 9 00:31:19.454097 kernel: BTRFS info (device vda6): last unmount of filesystem a5263def-4663-4ce6-b873-45a7d7f1ec33 Sep 9 00:31:19.594171 ignition[930]: INFO : Ignition 2.19.0 Sep 9 00:31:19.594171 ignition[930]: INFO : Stage: mount Sep 9 00:31:19.594171 ignition[930]: INFO : no configs at "/usr/lib/ignition/base.d" Sep 9 00:31:19.594171 ignition[930]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Sep 9 00:31:19.594171 ignition[930]: INFO : mount: mount passed Sep 9 00:31:19.594171 ignition[930]: INFO : Ignition finished successfully Sep 9 00:31:19.601764 systemd[1]: Finished ignition-mount.service - Ignition (mount). Sep 9 00:31:19.639184 systemd[1]: Starting ignition-files.service - Ignition (files)... Sep 9 00:31:19.642981 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Sep 9 00:31:19.759880 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Sep 9 00:31:19.772871 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 scanned by mount (944) Sep 9 00:31:19.772940 kernel: BTRFS info (device vda6): first mount of filesystem a5263def-4663-4ce6-b873-45a7d7f1ec33 Sep 9 00:31:19.775296 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Sep 9 00:31:19.775343 kernel: BTRFS info (device vda6): using free space tree Sep 9 00:31:19.789700 kernel: BTRFS info (device vda6): auto enabling async discard Sep 9 00:31:19.797925 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
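(At this point the initrd has assembled the BTRFS OEM partition on vda6 and the ext4 root labeled ROOT on vda9. To reproduce this view of the disk from a shell in the booted system, assuming the same virtio device naming:

    lsblk -o NAME,LABEL,FSTYPE,MOUNTPOINT /dev/vda
    blkid /dev/vda6 /dev/vda9

Both are standard util-linux tools; nothing Flatcar-specific is involved.)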
Sep 9 00:31:19.855772 ignition[961]: INFO : Ignition 2.19.0 Sep 9 00:31:19.855772 ignition[961]: INFO : Stage: files Sep 9 00:31:19.867586 ignition[961]: INFO : no configs at "/usr/lib/ignition/base.d" Sep 9 00:31:19.867586 ignition[961]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Sep 9 00:31:19.867586 ignition[961]: DEBUG : files: compiled without relabeling support, skipping Sep 9 00:31:19.926114 ignition[961]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Sep 9 00:31:19.926114 ignition[961]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Sep 9 00:31:19.944318 ignition[961]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Sep 9 00:31:19.950647 ignition[961]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Sep 9 00:31:19.954872 unknown[961]: wrote ssh authorized keys file for user: core Sep 9 00:31:19.960765 ignition[961]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Sep 9 00:31:19.968685 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/etc/flatcar-cgroupv1" Sep 9 00:31:19.968685 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/etc/flatcar-cgroupv1" Sep 9 00:31:19.968685 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Sep 9 00:31:19.968685 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1 Sep 9 00:31:20.049275 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Sep 9 00:31:20.512226 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Sep 9 00:31:20.512226 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh" Sep 9 00:31:20.538960 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh" Sep 9 00:31:20.538960 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml" Sep 9 00:31:20.538960 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml" Sep 9 00:31:20.538960 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Sep 9 00:31:20.538960 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Sep 9 00:31:20.538960 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Sep 9 00:31:20.538960 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Sep 9 00:31:20.538960 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf" Sep 9 00:31:20.538960 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf" Sep 9 00:31:20.538960 ignition[961]: INFO : files: 
createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw" Sep 9 00:31:20.538960 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw" Sep 9 00:31:20.538960 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw" Sep 9 00:31:20.538960 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://extensions.flatcar.org/extensions/kubernetes-v1.31.8-x86-64.raw: attempt #1 Sep 9 00:31:20.925641 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK Sep 9 00:31:23.313738 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw" Sep 9 00:31:23.313738 ignition[961]: INFO : files: op(c): [started] processing unit "containerd.service" Sep 9 00:31:23.324545 ignition[961]: INFO : files: op(c): op(d): [started] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf" Sep 9 00:31:23.348033 ignition[961]: INFO : files: op(c): op(d): [finished] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf" Sep 9 00:31:23.348033 ignition[961]: INFO : files: op(c): [finished] processing unit "containerd.service" Sep 9 00:31:23.348033 ignition[961]: INFO : files: op(e): [started] processing unit "prepare-helm.service" Sep 9 00:31:23.348033 ignition[961]: INFO : files: op(e): op(f): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Sep 9 00:31:23.348033 ignition[961]: INFO : files: op(e): op(f): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Sep 9 00:31:23.348033 ignition[961]: INFO : files: op(e): [finished] processing unit "prepare-helm.service" Sep 9 00:31:23.348033 ignition[961]: INFO : files: op(10): [started] processing unit "coreos-metadata.service" Sep 9 00:31:23.348033 ignition[961]: INFO : files: op(10): op(11): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Sep 9 00:31:23.348033 ignition[961]: INFO : files: op(10): op(11): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Sep 9 00:31:23.348033 ignition[961]: INFO : files: op(10): [finished] processing unit "coreos-metadata.service" Sep 9 00:31:23.348033 ignition[961]: INFO : files: op(12): [started] setting preset to disabled for "coreos-metadata.service" Sep 9 00:31:23.479230 ignition[961]: INFO : files: op(12): op(13): [started] removing enablement symlink(s) for "coreos-metadata.service" Sep 9 00:31:23.679555 ignition[961]: INFO : files: op(12): op(13): [finished] removing enablement symlink(s) for "coreos-metadata.service" Sep 9 00:31:23.679555 ignition[961]: INFO : files: op(12): [finished] setting preset to disabled for "coreos-metadata.service" Sep 9 00:31:23.687069 ignition[961]: INFO : files: op(14): [started] setting preset to enabled for "prepare-helm.service" Sep 9 00:31:23.687069 ignition[961]: INFO : files: op(14): [finished] setting preset to enabled for "prepare-helm.service" 
Sep 9 00:31:23.687069 ignition[961]: INFO : files: createResultFile: createFiles: op(15): [started] writing file "/sysroot/etc/.ignition-result.json" Sep 9 00:31:23.687069 ignition[961]: INFO : files: createResultFile: createFiles: op(15): [finished] writing file "/sysroot/etc/.ignition-result.json" Sep 9 00:31:23.687069 ignition[961]: INFO : files: files passed Sep 9 00:31:23.687069 ignition[961]: INFO : Ignition finished successfully Sep 9 00:31:23.704770 systemd[1]: Finished ignition-files.service - Ignition (files). Sep 9 00:31:23.725126 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Sep 9 00:31:23.732309 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Sep 9 00:31:23.740694 systemd[1]: ignition-quench.service: Deactivated successfully. Sep 9 00:31:23.741513 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Sep 9 00:31:23.784967 initrd-setup-root-after-ignition[990]: grep: /sysroot/oem/oem-release: No such file or directory Sep 9 00:31:23.796616 initrd-setup-root-after-ignition[992]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Sep 9 00:31:23.796616 initrd-setup-root-after-ignition[992]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Sep 9 00:31:23.802798 initrd-setup-root-after-ignition[996]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Sep 9 00:31:23.807653 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Sep 9 00:31:23.812453 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Sep 9 00:31:23.833326 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Sep 9 00:31:23.919161 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Sep 9 00:31:23.919363 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Sep 9 00:31:23.934242 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Sep 9 00:31:23.938634 systemd[1]: Reached target initrd.target - Initrd Default Target. Sep 9 00:31:23.949160 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Sep 9 00:31:23.982304 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Sep 9 00:31:24.037680 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Sep 9 00:31:24.061586 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Sep 9 00:31:24.097635 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Sep 9 00:31:24.109550 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Sep 9 00:31:24.115462 systemd[1]: Stopped target timers.target - Timer Units. Sep 9 00:31:24.115742 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Sep 9 00:31:24.115949 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Sep 9 00:31:24.118992 systemd[1]: Stopped target initrd.target - Initrd Default Target. Sep 9 00:31:24.119548 systemd[1]: Stopped target basic.target - Basic System. Sep 9 00:31:24.120055 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Sep 9 00:31:24.120559 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Sep 9 00:31:24.124278 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. 
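(The op() sequence in the files stage above maps directly onto an Ignition config. A hypothetical, heavily abbreviated spec-3 config that would drive it; the field names come from the real schema, the contents are placeholders:

    {
      "ignition": { "version": "3.4.0" },
      "passwd": {
        "users": [{ "name": "core", "sshAuthorizedKeys": ["ssh-ed25519 AAAA..."] }]
      },
      "storage": {
        "files": [{
          "path": "/opt/helm-v3.13.2-linux-amd64.tar.gz",
          "contents": { "source": "https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz" }
        }]
      },
      "systemd": {
        "units": [
          { "name": "containerd.service",
            "dropins": [{ "name": "10-use-cgroupfs.conf", "contents": "[Service]\n..." }] },
          { "name": "prepare-helm.service", "enabled": true, "contents": "[Unit]\n..." },
          { "name": "coreos-metadata.service", "enabled": false }
        ]
      }
    }

"enabled": false is what surfaces in the log as "setting preset to disabled" and the removal of enablement symlinks for coreos-metadata.service.)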
Sep 9 00:31:24.128885 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Sep 9 00:31:24.129743 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Sep 9 00:31:24.130978 systemd[1]: Stopped target sysinit.target - System Initialization. Sep 9 00:31:24.132557 systemd[1]: Stopped target local-fs.target - Local File Systems. Sep 9 00:31:24.132734 systemd[1]: Stopped target swap.target - Swaps. Sep 9 00:31:24.132856 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Sep 9 00:31:24.136925 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Sep 9 00:31:24.146384 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Sep 9 00:31:24.146943 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Sep 9 00:31:24.147265 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Sep 9 00:31:24.156759 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Sep 9 00:31:24.160157 systemd[1]: dracut-initqueue.service: Deactivated successfully. Sep 9 00:31:24.160380 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Sep 9 00:31:24.160752 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Sep 9 00:31:24.160960 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Sep 9 00:31:24.161208 systemd[1]: Stopped target paths.target - Path Units. Sep 9 00:31:24.161319 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Sep 9 00:31:24.190013 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Sep 9 00:31:24.192746 systemd[1]: Stopped target slices.target - Slice Units. Sep 9 00:31:24.200811 systemd[1]: Stopped target sockets.target - Socket Units. Sep 9 00:31:24.202454 systemd[1]: iscsid.socket: Deactivated successfully. Sep 9 00:31:24.203384 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Sep 9 00:31:24.256399 systemd[1]: iscsiuio.socket: Deactivated successfully. Sep 9 00:31:24.260839 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Sep 9 00:31:24.268568 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Sep 9 00:31:24.268755 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Sep 9 00:31:24.283797 systemd[1]: ignition-files.service: Deactivated successfully. Sep 9 00:31:24.284016 systemd[1]: Stopped ignition-files.service - Ignition (files). Sep 9 00:31:24.298742 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Sep 9 00:31:24.306374 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Sep 9 00:31:24.306650 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Sep 9 00:31:24.340056 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Sep 9 00:31:24.342391 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Sep 9 00:31:24.342643 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Sep 9 00:31:24.355252 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Sep 9 00:31:24.355548 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Sep 9 00:31:24.370448 systemd[1]: initrd-cleanup.service: Deactivated successfully. Sep 9 00:31:24.370636 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. 
Sep 9 00:31:24.378053 systemd[1]: sysroot-boot.mount: Deactivated successfully. Sep 9 00:31:24.391451 ignition[1016]: INFO : Ignition 2.19.0 Sep 9 00:31:24.391451 ignition[1016]: INFO : Stage: umount Sep 9 00:31:24.397057 ignition[1016]: INFO : no configs at "/usr/lib/ignition/base.d" Sep 9 00:31:24.397057 ignition[1016]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Sep 9 00:31:24.397057 ignition[1016]: INFO : umount: umount passed Sep 9 00:31:24.397057 ignition[1016]: INFO : Ignition finished successfully Sep 9 00:31:24.395189 systemd[1]: sysroot-boot.service: Deactivated successfully. Sep 9 00:31:24.395353 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Sep 9 00:31:24.402295 systemd[1]: ignition-mount.service: Deactivated successfully. Sep 9 00:31:24.402579 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Sep 9 00:31:24.408050 systemd[1]: Stopped target network.target - Network. Sep 9 00:31:24.408336 systemd[1]: ignition-disks.service: Deactivated successfully. Sep 9 00:31:24.408452 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Sep 9 00:31:24.408788 systemd[1]: ignition-kargs.service: Deactivated successfully. Sep 9 00:31:24.408856 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Sep 9 00:31:24.410091 systemd[1]: ignition-setup.service: Deactivated successfully. Sep 9 00:31:24.410151 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Sep 9 00:31:24.410571 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Sep 9 00:31:24.410642 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Sep 9 00:31:24.410963 systemd[1]: initrd-setup-root.service: Deactivated successfully. Sep 9 00:31:24.411017 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Sep 9 00:31:24.411631 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Sep 9 00:31:24.414958 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Sep 9 00:31:24.427032 systemd-networkd[786]: eth0: DHCPv6 lease lost Sep 9 00:31:24.429692 systemd[1]: systemd-resolved.service: Deactivated successfully. Sep 9 00:31:24.429857 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Sep 9 00:31:24.529767 systemd[1]: systemd-networkd.service: Deactivated successfully. Sep 9 00:31:24.530056 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Sep 9 00:31:24.549295 systemd[1]: systemd-networkd.socket: Deactivated successfully. Sep 9 00:31:24.549444 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Sep 9 00:31:24.601653 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Sep 9 00:31:24.648724 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Sep 9 00:31:24.650705 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Sep 9 00:31:24.679401 systemd[1]: systemd-sysctl.service: Deactivated successfully. Sep 9 00:31:24.682684 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Sep 9 00:31:24.688829 systemd[1]: systemd-modules-load.service: Deactivated successfully. Sep 9 00:31:24.688949 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Sep 9 00:31:24.700091 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Sep 9 00:31:24.700215 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. 
Sep 9 00:31:24.702307 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Sep 9 00:31:24.740593 systemd[1]: systemd-udevd.service: Deactivated successfully. Sep 9 00:31:24.740918 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Sep 9 00:31:24.747830 systemd[1]: network-cleanup.service: Deactivated successfully. Sep 9 00:31:24.747999 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Sep 9 00:31:24.754911 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Sep 9 00:31:24.755057 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Sep 9 00:31:24.758154 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Sep 9 00:31:24.758221 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Sep 9 00:31:24.764333 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Sep 9 00:31:24.764489 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Sep 9 00:31:24.776233 systemd[1]: dracut-cmdline.service: Deactivated successfully. Sep 9 00:31:24.776349 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Sep 9 00:31:24.776540 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Sep 9 00:31:24.776617 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Sep 9 00:31:24.798511 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Sep 9 00:31:24.803514 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Sep 9 00:31:24.803626 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Sep 9 00:31:24.826026 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Sep 9 00:31:24.826139 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Sep 9 00:31:24.828541 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Sep 9 00:31:24.828711 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Sep 9 00:31:24.846447 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Sep 9 00:31:24.895832 systemd[1]: Starting initrd-switch-root.service - Switch Root... Sep 9 00:31:24.927779 systemd[1]: Switching root. Sep 9 00:31:24.986493 systemd-journald[194]: Journal stopped Sep 9 00:31:28.428459 systemd-journald[194]: Received SIGTERM from PID 1 (systemd). Sep 9 00:31:28.428550 kernel: SELinux: policy capability network_peer_controls=1 Sep 9 00:31:28.428575 kernel: SELinux: policy capability open_perms=1 Sep 9 00:31:28.428596 kernel: SELinux: policy capability extended_socket_class=1 Sep 9 00:31:28.428612 kernel: SELinux: policy capability always_check_network=0 Sep 9 00:31:28.428627 kernel: SELinux: policy capability cgroup_seclabel=1 Sep 9 00:31:28.428642 kernel: SELinux: policy capability nnp_nosuid_transition=1 Sep 9 00:31:28.428658 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Sep 9 00:31:28.428673 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Sep 9 00:31:28.428695 kernel: audit: type=1403 audit(1757377886.026:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Sep 9 00:31:28.428726 systemd[1]: Successfully loaded SELinux policy in 67.477ms. Sep 9 00:31:28.428758 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 19.720ms. 
Sep 9 00:31:28.428775 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Sep 9 00:31:28.428792 systemd[1]: Detected virtualization kvm. Sep 9 00:31:28.428808 systemd[1]: Detected architecture x86-64. Sep 9 00:31:28.428826 systemd[1]: Detected first boot. Sep 9 00:31:28.428843 systemd[1]: Initializing machine ID from VM UUID. Sep 9 00:31:28.428859 zram_generator::config[1078]: No configuration found. Sep 9 00:31:28.428882 systemd[1]: Populated /etc with preset unit settings. Sep 9 00:31:28.428899 systemd[1]: Queued start job for default target multi-user.target. Sep 9 00:31:28.428915 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Sep 9 00:31:28.428932 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Sep 9 00:31:28.428948 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Sep 9 00:31:28.428970 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Sep 9 00:31:28.428998 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Sep 9 00:31:28.429018 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Sep 9 00:31:28.429035 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Sep 9 00:31:28.429061 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Sep 9 00:31:28.429077 systemd[1]: Created slice user.slice - User and Session Slice. Sep 9 00:31:28.429093 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Sep 9 00:31:28.429110 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Sep 9 00:31:28.429126 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Sep 9 00:31:28.429142 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Sep 9 00:31:28.429159 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Sep 9 00:31:28.429175 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Sep 9 00:31:28.429191 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Sep 9 00:31:28.429216 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Sep 9 00:31:28.429232 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Sep 9 00:31:28.429249 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Sep 9 00:31:28.429265 systemd[1]: Reached target remote-fs.target - Remote File Systems. Sep 9 00:31:28.429281 systemd[1]: Reached target slices.target - Slice Units. Sep 9 00:31:28.429297 systemd[1]: Reached target swap.target - Swaps. Sep 9 00:31:28.429313 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Sep 9 00:31:28.429330 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Sep 9 00:31:28.429352 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Sep 9 00:31:28.429368 systemd[1]: Listening on systemd-journald.socket - Journal Socket. 
Sep 9 00:31:28.429384 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Sep 9 00:31:28.429400 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Sep 9 00:31:28.429431 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Sep 9 00:31:28.429453 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Sep 9 00:31:28.429469 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Sep 9 00:31:28.429485 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Sep 9 00:31:28.429501 systemd[1]: Mounting media.mount - External Media Directory... Sep 9 00:31:28.429525 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 9 00:31:28.429541 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Sep 9 00:31:28.429563 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Sep 9 00:31:28.429580 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Sep 9 00:31:28.429599 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Sep 9 00:31:28.429616 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Sep 9 00:31:28.429634 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Sep 9 00:31:28.429651 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Sep 9 00:31:28.429691 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Sep 9 00:31:28.429728 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Sep 9 00:31:28.429745 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Sep 9 00:31:28.429764 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Sep 9 00:31:28.429781 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Sep 9 00:31:28.429800 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Sep 9 00:31:28.429817 systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling. Sep 9 00:31:28.429835 systemd[1]: systemd-journald.service: (This warning is only shown for the first unit using IP firewalling.) Sep 9 00:31:28.429863 systemd[1]: Starting systemd-journald.service - Journal Service... Sep 9 00:31:28.429881 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Sep 9 00:31:28.429899 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Sep 9 00:31:28.429917 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Sep 9 00:31:28.429934 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Sep 9 00:31:28.429952 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 9 00:31:28.429969 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Sep 9 00:31:28.429985 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Sep 9 00:31:28.430002 systemd[1]: Mounted media.mount - External Media Directory. 
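(The modprobe@configfs, modprobe@dm_mod, modprobe@efi_pstore, modprobe@fuse and modprobe@loop starts above are all instances of systemd's modprobe@.service template, which expands the instance name into a modprobe invocation, roughly:

    # /usr/lib/systemd/system/modprobe@.service (abridged sketch)
    [Unit]
    Description=Load Kernel Module %i
    DefaultDependencies=no

    [Service]
    Type=oneshot
    ExecStart=-/sbin/modprobe -abq %i

So "systemctl start modprobe@fuse.service" amounts to "modprobe -abq fuse", and the leading "-" on ExecStart keeps a missing module from failing the unit.)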
Sep 9 00:31:28.430030 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Sep 9 00:31:28.430048 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Sep 9 00:31:28.430065 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Sep 9 00:31:28.430124 systemd-journald[1155]: Collecting audit messages is disabled. Sep 9 00:31:28.430153 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Sep 9 00:31:28.430171 kernel: loop: module loaded Sep 9 00:31:28.431530 systemd-journald[1155]: Journal started Sep 9 00:31:28.431573 systemd-journald[1155]: Runtime Journal (/run/log/journal/aa19e5253e614273a78b09d46575e853) is 6.0M, max 48.3M, 42.2M free. Sep 9 00:31:28.454523 systemd[1]: Started systemd-journald.service - Journal Service. Sep 9 00:31:28.459240 systemd[1]: modprobe@configfs.service: Deactivated successfully. Sep 9 00:31:28.459992 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Sep 9 00:31:28.463012 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 9 00:31:28.463323 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Sep 9 00:31:28.467633 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 9 00:31:28.467990 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Sep 9 00:31:28.472837 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Sep 9 00:31:28.477761 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Sep 9 00:31:28.482236 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Sep 9 00:31:28.488798 kernel: fuse: init (API version 7.39) Sep 9 00:31:28.485551 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 9 00:31:28.485879 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Sep 9 00:31:28.525464 systemd[1]: modprobe@fuse.service: Deactivated successfully. Sep 9 00:31:28.525903 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Sep 9 00:31:28.547678 systemd[1]: Reached target network-pre.target - Preparation for Network. Sep 9 00:31:28.578501 kernel: ACPI: bus type drm_connector registered Sep 9 00:31:28.582647 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Sep 9 00:31:28.599661 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Sep 9 00:31:28.611147 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Sep 9 00:31:28.684977 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Sep 9 00:31:28.701136 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Sep 9 00:31:28.727814 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Sep 9 00:31:28.740103 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Sep 9 00:31:28.762486 systemd-journald[1155]: Time spent on flushing to /var/log/journal/aa19e5253e614273a78b09d46575e853 is 27.047ms for 976 entries. Sep 9 00:31:28.762486 systemd-journald[1155]: System Journal (/var/log/journal/aa19e5253e614273a78b09d46575e853) is 8.0M, max 195.6M, 187.6M free. Sep 9 00:31:29.023218 systemd-journald[1155]: Received client request to flush runtime journal. 
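(The two journal size lines show journald's defaults at work: the runtime journal in /run is capped relative to the tmpfs size (48.3M here) and gets flushed into the persistent journal under /var/log/journal (capped at 195.6M) once the root filesystem is writable. Both caps can be pinned with standard journald.conf options; a hypothetical drop-in:

    # /etc/systemd/journald.conf.d/size.conf
    [Journal]
    RuntimeMaxUse=48M
    SystemMaxUse=200M

RuntimeMaxUse= and SystemMaxUse= are documented journald.conf settings; the values above are arbitrary.)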
Sep 9 00:31:28.766172 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Sep 9 00:31:28.769453 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Sep 9 00:31:28.778073 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Sep 9 00:31:28.863007 systemd[1]: modprobe@drm.service: Deactivated successfully. Sep 9 00:31:28.863382 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Sep 9 00:31:28.866546 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Sep 9 00:31:28.870659 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Sep 9 00:31:28.876267 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Sep 9 00:31:29.000904 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Sep 9 00:31:29.004501 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Sep 9 00:31:29.017687 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Sep 9 00:31:29.038335 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Sep 9 00:31:29.044229 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Sep 9 00:31:29.057486 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Sep 9 00:31:29.068078 udevadm[1224]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. Sep 9 00:31:29.070716 systemd-tmpfiles[1212]: ACLs are not supported, ignoring. Sep 9 00:31:29.070750 systemd-tmpfiles[1212]: ACLs are not supported, ignoring. Sep 9 00:31:29.097354 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Sep 9 00:31:29.118081 systemd[1]: Starting systemd-sysusers.service - Create System Users... Sep 9 00:31:29.185743 systemd[1]: Finished systemd-sysusers.service - Create System Users. Sep 9 00:31:29.199852 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Sep 9 00:31:29.349055 systemd-tmpfiles[1237]: ACLs are not supported, ignoring. Sep 9 00:31:29.349562 systemd-tmpfiles[1237]: ACLs are not supported, ignoring. Sep 9 00:31:29.358714 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Sep 9 00:31:30.724317 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Sep 9 00:31:30.742844 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Sep 9 00:31:30.779144 systemd-udevd[1243]: Using default interface naming scheme 'v255'. Sep 9 00:31:30.867342 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Sep 9 00:31:30.901770 systemd[1]: Starting systemd-networkd.service - Network Configuration... Sep 9 00:31:30.933693 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Sep 9 00:31:30.955280 systemd[1]: Found device dev-ttyS0.device - /dev/ttyS0. Sep 9 00:31:31.177694 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 36 scanned by (udev-worker) (1263) Sep 9 00:31:31.389075 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3 Sep 9 00:31:31.391114 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. 
Sep 9 00:31:31.396824 kernel: ACPI: button: Power Button [PWRF] Sep 9 00:31:31.398940 systemd[1]: Started systemd-userdbd.service - User Database Manager. Sep 9 00:31:31.452811 kernel: i801_smbus 0000:00:1f.3: Enabling SMBus device Sep 9 00:31:31.466875 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt Sep 9 00:31:31.467120 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI) Sep 9 00:31:31.467480 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD Sep 9 00:31:31.467704 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input4 Sep 9 00:31:31.840997 systemd-networkd[1247]: lo: Link UP Sep 9 00:31:31.841484 systemd-networkd[1247]: lo: Gained carrier Sep 9 00:31:31.846107 systemd-networkd[1247]: Enumeration completed Sep 9 00:31:31.846902 systemd-networkd[1247]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 9 00:31:31.846969 systemd-networkd[1247]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Sep 9 00:31:31.850616 systemd-networkd[1247]: eth0: Link UP Sep 9 00:31:31.851277 systemd-networkd[1247]: eth0: Gained carrier Sep 9 00:31:31.851345 systemd-networkd[1247]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 9 00:31:31.867450 kernel: mousedev: PS/2 mouse device common for all mice Sep 9 00:31:31.931514 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Sep 9 00:31:31.944758 systemd[1]: Started systemd-networkd.service - Network Configuration. Sep 9 00:31:32.033802 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Sep 9 00:31:32.037573 systemd-networkd[1247]: eth0: DHCPv4 address 10.0.0.108/16, gateway 10.0.0.1 acquired from 10.0.0.1 Sep 9 00:31:32.288126 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Sep 9 00:31:32.323002 kernel: kvm_amd: TSC scaling supported Sep 9 00:31:32.323116 kernel: kvm_amd: Nested Virtualization enabled Sep 9 00:31:32.323135 kernel: kvm_amd: Nested Paging enabled Sep 9 00:31:32.323578 kernel: kvm_amd: LBR virtualization supported Sep 9 00:31:32.325302 kernel: kvm_amd: Virtual VMLOAD VMSAVE supported Sep 9 00:31:32.325339 kernel: kvm_amd: Virtual GIF supported Sep 9 00:31:32.518022 kernel: EDAC MC: Ver: 3.0.0 Sep 9 00:31:32.701689 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Sep 9 00:31:32.730781 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Sep 9 00:31:32.757340 lvm[1291]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Sep 9 00:31:32.872844 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Sep 9 00:31:32.876089 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Sep 9 00:31:32.898743 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Sep 9 00:31:32.919803 lvm[1294]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Sep 9 00:31:32.961874 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Sep 9 00:31:32.966528 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. 
Sep 9 00:31:32.968629 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Sep 9 00:31:32.968674 systemd[1]: Reached target local-fs.target - Local File Systems. Sep 9 00:31:32.970683 systemd[1]: Reached target machines.target - Containers. Sep 9 00:31:32.975001 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Sep 9 00:31:32.987705 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Sep 9 00:31:32.995178 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Sep 9 00:31:32.997584 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Sep 9 00:31:33.014899 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Sep 9 00:31:33.039069 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Sep 9 00:31:33.048929 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Sep 9 00:31:33.051587 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Sep 9 00:31:33.083983 kernel: loop0: detected capacity change from 0 to 140768 Sep 9 00:31:33.105254 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Sep 9 00:31:33.106081 systemd-networkd[1247]: eth0: Gained IPv6LL Sep 9 00:31:33.114210 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Sep 9 00:31:33.132859 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Sep 9 00:31:33.135777 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Sep 9 00:31:33.157326 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Sep 9 00:31:33.200864 kernel: loop1: detected capacity change from 0 to 142488 Sep 9 00:31:33.329656 kernel: loop2: detected capacity change from 0 to 221472 Sep 9 00:31:33.476660 kernel: loop3: detected capacity change from 0 to 140768 Sep 9 00:31:33.558462 kernel: loop4: detected capacity change from 0 to 142488 Sep 9 00:31:33.633292 kernel: loop5: detected capacity change from 0 to 221472 Sep 9 00:31:33.665085 (sd-merge)[1317]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'. Sep 9 00:31:33.668186 (sd-merge)[1317]: Merged extensions into '/usr'. Sep 9 00:31:33.685507 systemd[1]: Reloading requested from client PID 1302 ('systemd-sysext') (unit systemd-sysext.service)... Sep 9 00:31:33.685533 systemd[1]: Reloading... Sep 9 00:31:33.869816 zram_generator::config[1340]: No configuration found. Sep 9 00:31:34.368069 ldconfig[1298]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Sep 9 00:31:34.431744 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 9 00:31:34.544121 systemd[1]: Reloading finished in 857 ms. Sep 9 00:31:34.578429 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Sep 9 00:31:34.581304 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Sep 9 00:31:34.602704 systemd[1]: Starting ensure-sysext.service... 
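(The (sd-merge) lines show systemd-sysext overlaying the three extension images, containerd-flatcar, docker-flatcar and the kubernetes.raw that Ignition linked into /etc/extensions earlier, onto /usr. The same mechanism can be exercised by hand:

    systemd-sysext list      # show available extension images
    systemd-sysext refresh   # unmerge, then re-merge /etc/extensions, /var/lib/extensions, ...

Both subcommands are part of systemd-sysext in systemd 255, the version this system runs.)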
Sep 9 00:31:34.611779 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Sep 9 00:31:34.620152 systemd[1]: Reloading requested from client PID 1389 ('systemctl') (unit ensure-sysext.service)... Sep 9 00:31:34.620174 systemd[1]: Reloading... Sep 9 00:31:34.656449 systemd-tmpfiles[1390]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Sep 9 00:31:34.657205 systemd-tmpfiles[1390]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Sep 9 00:31:34.658484 systemd-tmpfiles[1390]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Sep 9 00:31:34.658882 systemd-tmpfiles[1390]: ACLs are not supported, ignoring. Sep 9 00:31:34.658979 systemd-tmpfiles[1390]: ACLs are not supported, ignoring. Sep 9 00:31:34.666361 systemd-tmpfiles[1390]: Detected autofs mount point /boot during canonicalization of boot. Sep 9 00:31:34.666378 systemd-tmpfiles[1390]: Skipping /boot Sep 9 00:31:34.684848 systemd-tmpfiles[1390]: Detected autofs mount point /boot during canonicalization of boot. Sep 9 00:31:34.685584 systemd-tmpfiles[1390]: Skipping /boot Sep 9 00:31:34.782579 zram_generator::config[1417]: No configuration found. Sep 9 00:31:35.021848 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 9 00:31:35.161509 systemd[1]: Reloading finished in 540 ms. Sep 9 00:31:35.192523 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Sep 9 00:31:35.258903 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Sep 9 00:31:35.280608 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Sep 9 00:31:35.295877 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Sep 9 00:31:35.308395 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Sep 9 00:31:35.318920 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Sep 9 00:31:35.327408 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 9 00:31:35.327670 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Sep 9 00:31:35.331533 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Sep 9 00:31:35.342933 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Sep 9 00:31:35.353202 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Sep 9 00:31:35.355103 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Sep 9 00:31:35.355264 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 9 00:31:35.360337 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 9 00:31:35.360691 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Sep 9 00:31:35.367404 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 9 00:31:35.367807 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. 
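(The "Duplicate line for path ..., ignoring" messages are benign: when two tmpfiles.d fragments declare the same path, systemd-tmpfiles keeps the first line it parses and drops the rest. The supported way to change an entry is to shadow an entire fragment by file name, since /etc/tmpfiles.d/ overrides /usr/lib/tmpfiles.d/; a hypothetical override:

    # /etc/tmpfiles.d/provision.conf, shadowing the /usr/lib fragment of the same name
    d /root 0700 root root -

The columns are the standard tmpfiles.d syntax: type, path, mode, user, group, age.)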
Sep 9 00:31:35.388881 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Sep 9 00:31:35.406393 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 9 00:31:35.407595 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Sep 9 00:31:35.414391 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 9 00:31:35.420970 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Sep 9 00:31:35.430135 augenrules[1495]: No rules Sep 9 00:31:35.435563 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Sep 9 00:31:35.445933 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Sep 9 00:31:35.467630 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Sep 9 00:31:35.469540 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Sep 9 00:31:35.469759 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 9 00:31:35.472073 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Sep 9 00:31:35.474863 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Sep 9 00:31:35.477614 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 9 00:31:35.478048 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Sep 9 00:31:35.480482 systemd[1]: modprobe@drm.service: Deactivated successfully. Sep 9 00:31:35.480769 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Sep 9 00:31:35.486341 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 9 00:31:35.487242 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Sep 9 00:31:35.495926 systemd[1]: Finished ensure-sysext.service. Sep 9 00:31:35.515981 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Sep 9 00:31:35.516125 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Sep 9 00:31:35.538772 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Sep 9 00:31:35.552952 systemd[1]: Starting systemd-update-done.service - Update is Completed... Sep 9 00:31:35.555978 systemd-resolved[1467]: Positive Trust Anchors: Sep 9 00:31:35.556006 systemd-resolved[1467]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Sep 9 00:31:35.556053 systemd-resolved[1467]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Sep 9 00:31:35.570277 systemd-resolved[1467]: Defaulting to hostname 'linux'. Sep 9 00:31:35.575390 systemd[1]: Started systemd-resolved.service - Network Name Resolution. 
Sep 9 00:31:35.579221 systemd[1]: Reached target network.target - Network. Sep 9 00:31:35.584371 systemd[1]: Reached target network-online.target - Network is Online. Sep 9 00:31:35.585758 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Sep 9 00:31:35.594457 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Sep 9 00:31:35.604869 systemd[1]: Finished systemd-update-done.service - Update is Completed. Sep 9 00:31:35.608887 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Sep 9 00:31:35.729842 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Sep 9 00:31:35.731875 systemd[1]: Reached target sysinit.target - System Initialization. Sep 9 00:31:35.737813 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Sep 9 00:31:35.738609 systemd-timesyncd[1516]: Contacted time server 10.0.0.1:123 (10.0.0.1). Sep 9 00:31:35.738688 systemd-timesyncd[1516]: Initial clock synchronization to Tue 2025-09-09 00:31:36.078783 UTC. Sep 9 00:31:35.743811 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Sep 9 00:31:35.753359 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Sep 9 00:31:35.755029 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Sep 9 00:31:35.755072 systemd[1]: Reached target paths.target - Path Units. Sep 9 00:31:35.762984 systemd[1]: Reached target time-set.target - System Time Set. Sep 9 00:31:35.764983 systemd[1]: Started logrotate.timer - Daily rotation of log files. Sep 9 00:31:35.772662 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Sep 9 00:31:35.774584 systemd[1]: Reached target timers.target - Timer Units. Sep 9 00:31:35.791487 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Sep 9 00:31:35.803002 systemd[1]: Starting docker.socket - Docker Socket for the API... Sep 9 00:31:35.807299 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Sep 9 00:31:35.812774 systemd[1]: Listening on docker.socket - Docker Socket for the API. Sep 9 00:31:35.820791 systemd[1]: Reached target sockets.target - Socket Units. Sep 9 00:31:35.851456 systemd[1]: Reached target basic.target - Basic System. Sep 9 00:31:35.853288 systemd[1]: System is tainted: cgroupsv1 Sep 9 00:31:35.853358 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Sep 9 00:31:35.853385 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Sep 9 00:31:35.863778 systemd[1]: Starting containerd.service - containerd container runtime... Sep 9 00:31:35.883906 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Sep 9 00:31:35.903878 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Sep 9 00:31:35.922698 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Sep 9 00:31:35.939045 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... 
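(timesyncd's "Contacted time server 10.0.0.1:123" most likely reflects an NTP server handed out via DHCP, which networkd forwards to timesyncd by default (UseNTP= defaults to yes), rather than a statically configured one; that provenance is an inference, not something the log states. Pinning a server explicitly would look like this hypothetical drop-in:

    # /etc/systemd/timesyncd.conf.d/ntp.conf
    [Time]
    NTP=10.0.0.1
    FallbackNTP=pool.ntp.org

NTP= and FallbackNTP= are the documented timesyncd.conf settings.)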
Sep 9 00:31:35.939644 jq[1530]: false Sep 9 00:31:35.943941 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Sep 9 00:31:35.953032 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 9 00:31:35.969798 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Sep 9 00:31:35.979647 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Sep 9 00:31:35.986823 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Sep 9 00:31:35.994024 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Sep 9 00:31:35.999433 extend-filesystems[1531]: Found loop3 Sep 9 00:31:36.006907 extend-filesystems[1531]: Found loop4 Sep 9 00:31:36.006907 extend-filesystems[1531]: Found loop5 Sep 9 00:31:36.006907 extend-filesystems[1531]: Found sr0 Sep 9 00:31:36.006907 extend-filesystems[1531]: Found vda Sep 9 00:31:36.006907 extend-filesystems[1531]: Found vda1 Sep 9 00:31:36.006907 extend-filesystems[1531]: Found vda2 Sep 9 00:31:36.006907 extend-filesystems[1531]: Found vda3 Sep 9 00:31:36.006907 extend-filesystems[1531]: Found usr Sep 9 00:31:36.006907 extend-filesystems[1531]: Found vda4 Sep 9 00:31:36.006907 extend-filesystems[1531]: Found vda6 Sep 9 00:31:36.006907 extend-filesystems[1531]: Found vda7 Sep 9 00:31:36.006907 extend-filesystems[1531]: Found vda9 Sep 9 00:31:36.006907 extend-filesystems[1531]: Checking size of /dev/vda9 Sep 9 00:31:36.128657 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 36 scanned by (udev-worker) (1562) Sep 9 00:31:36.003162 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Sep 9 00:31:36.128938 extend-filesystems[1531]: Resized partition /dev/vda9 Sep 9 00:31:36.051036 dbus-daemon[1529]: [system] SELinux support is enabled Sep 9 00:31:36.015951 systemd[1]: Starting systemd-logind.service - User Login Management... Sep 9 00:31:36.140964 extend-filesystems[1565]: resize2fs 1.47.1 (20-May-2024) Sep 9 00:31:36.152005 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Sep 9 00:31:36.041827 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Sep 9 00:31:36.050745 systemd[1]: Starting update-engine.service - Update Engine... Sep 9 00:31:36.066098 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Sep 9 00:31:36.140528 systemd[1]: Started dbus.service - D-Bus System Message Bus. Sep 9 00:31:36.152863 jq[1567]: true Sep 9 00:31:36.160670 update_engine[1557]: I20250909 00:31:36.160408 1557 main.cc:92] Flatcar Update Engine starting Sep 9 00:31:36.165335 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Sep 9 00:31:36.165771 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Sep 9 00:31:36.173125 systemd[1]: motdgen.service: Deactivated successfully. Sep 9 00:31:36.174719 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Sep 9 00:31:36.178046 update_engine[1557]: I20250909 00:31:36.177910 1557 update_check_scheduler.cc:74] Next update check in 4m0s Sep 9 00:31:36.180575 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Sep 9 00:31:36.233796 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. 
Sep 9 00:31:36.234410 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Sep 9 00:31:36.241791 systemd-logind[1548]: Watching system buttons on /dev/input/event1 (Power Button) Sep 9 00:31:36.242308 systemd-logind[1548]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Sep 9 00:31:36.242988 systemd-logind[1548]: New seat seat0. Sep 9 00:31:36.323819 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Sep 9 00:31:36.364631 systemd[1]: Started systemd-logind.service - User Login Management. Sep 9 00:31:36.371542 extend-filesystems[1565]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Sep 9 00:31:36.371542 extend-filesystems[1565]: old_desc_blocks = 1, new_desc_blocks = 1 Sep 9 00:31:36.371542 extend-filesystems[1565]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. Sep 9 00:31:36.423743 extend-filesystems[1531]: Resized filesystem in /dev/vda9 Sep 9 00:31:36.389289 systemd[1]: extend-filesystems.service: Deactivated successfully. Sep 9 00:31:36.393203 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Sep 9 00:31:36.438667 jq[1585]: true Sep 9 00:31:36.459237 systemd[1]: coreos-metadata.service: Deactivated successfully. Sep 9 00:31:36.459803 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Sep 9 00:31:36.493997 (ntainerd)[1592]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Sep 9 00:31:36.550941 tar[1577]: linux-amd64/helm Sep 9 00:31:36.550700 dbus-daemon[1529]: [system] Successfully activated service 'org.freedesktop.systemd1' Sep 9 00:31:36.551654 sshd_keygen[1575]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Sep 9 00:31:36.573662 systemd[1]: Started update-engine.service - Update Engine. Sep 9 00:31:36.707835 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Sep 9 00:31:36.708158 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Sep 9 00:31:36.708373 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Sep 9 00:31:36.721007 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Sep 9 00:31:36.721233 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Sep 9 00:31:36.728937 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Sep 9 00:31:36.737923 systemd[1]: Started locksmithd.service - Cluster reboot manager. Sep 9 00:31:36.807758 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Sep 9 00:31:37.130607 locksmithd[1628]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Sep 9 00:31:37.699356 systemd[1]: Starting issuegen.service - Generate /run/issue... Sep 9 00:31:37.709018 bash[1623]: Updated "/home/core/.ssh/authorized_keys" Sep 9 00:31:37.702648 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Sep 9 00:31:37.712175 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. 
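The sshd-keygen step above generates the three host key types (RSA, ECDSA, ED25519) in one pass. A rough equivalent driving `ssh-keygen` from Python; the output directory is a hypothetical demo path, not the real /etc/ssh location the service writes to:

```python
import subprocess
from pathlib import Path

# Hypothetical demo directory; the real service writes under /etc/ssh.
OUT = Path("/tmp/demo-ssh")
OUT.mkdir(parents=True, exist_ok=True)

for key_type in ("rsa", "ecdsa", "ed25519"):   # the types listed in the log
    key_file = OUT / f"ssh_host_{key_type}_key"
    if key_file.exists():
        continue  # never overwrite an existing host key
    # -q: quiet, -N "": empty passphrase, -f: output file
    subprocess.run(
        ["ssh-keygen", "-q", "-t", key_type, "-N", "", "-f", str(key_file)],
        check=True,
    )
    print(f"generated {key_file}")
```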
Sep 9 00:31:37.726939 systemd[1]: issuegen.service: Deactivated successfully. Sep 9 00:31:37.727359 systemd[1]: Finished issuegen.service - Generate /run/issue. Sep 9 00:31:37.750170 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Sep 9 00:31:37.979897 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Sep 9 00:31:38.003945 systemd[1]: Started getty@tty1.service - Getty on tty1. Sep 9 00:31:38.014353 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Sep 9 00:31:38.016499 systemd[1]: Reached target getty.target - Login Prompts. Sep 9 00:31:38.815301 containerd[1592]: time="2025-09-09T00:31:38.815151613Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 Sep 9 00:31:38.941929 containerd[1592]: time="2025-09-09T00:31:38.941827988Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Sep 9 00:31:38.947970 containerd[1592]: time="2025-09-09T00:31:38.947898098Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.104-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Sep 9 00:31:38.947970 containerd[1592]: time="2025-09-09T00:31:38.947955855Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Sep 9 00:31:38.947970 containerd[1592]: time="2025-09-09T00:31:38.947979339Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Sep 9 00:31:38.948271 containerd[1592]: time="2025-09-09T00:31:38.948243762Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Sep 9 00:31:38.948271 containerd[1592]: time="2025-09-09T00:31:38.948268269Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Sep 9 00:31:38.949151 containerd[1592]: time="2025-09-09T00:31:38.948353659Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Sep 9 00:31:38.949151 containerd[1592]: time="2025-09-09T00:31:38.948372673Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Sep 9 00:31:38.949151 containerd[1592]: time="2025-09-09T00:31:38.948764569Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Sep 9 00:31:38.949151 containerd[1592]: time="2025-09-09T00:31:38.948785590Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Sep 9 00:31:38.949151 containerd[1592]: time="2025-09-09T00:31:38.948803860Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Sep 9 00:31:38.949151 containerd[1592]: time="2025-09-09T00:31:38.948820133Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." 
type=io.containerd.snapshotter.v1 Sep 9 00:31:38.949151 containerd[1592]: time="2025-09-09T00:31:38.948989297Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Sep 9 00:31:38.950690 containerd[1592]: time="2025-09-09T00:31:38.950613544Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Sep 9 00:31:38.951044 containerd[1592]: time="2025-09-09T00:31:38.951001394Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Sep 9 00:31:38.951044 containerd[1592]: time="2025-09-09T00:31:38.951030403Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Sep 9 00:31:38.951247 containerd[1592]: time="2025-09-09T00:31:38.951193350Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Sep 9 00:31:38.951931 containerd[1592]: time="2025-09-09T00:31:38.951888991Z" level=info msg="metadata content store policy set" policy=shared Sep 9 00:31:39.155252 containerd[1592]: time="2025-09-09T00:31:39.154082681Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Sep 9 00:31:39.155252 containerd[1592]: time="2025-09-09T00:31:39.154210311Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Sep 9 00:31:39.155252 containerd[1592]: time="2025-09-09T00:31:39.154233557Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Sep 9 00:31:39.155252 containerd[1592]: time="2025-09-09T00:31:39.154407217Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Sep 9 00:31:39.155252 containerd[1592]: time="2025-09-09T00:31:39.154460388Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Sep 9 00:31:39.155252 containerd[1592]: time="2025-09-09T00:31:39.154730250Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Sep 9 00:31:39.167466 containerd[1592]: time="2025-09-09T00:31:39.165526598Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Sep 9 00:31:39.167466 containerd[1592]: time="2025-09-09T00:31:39.165873207Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Sep 9 00:31:39.167466 containerd[1592]: time="2025-09-09T00:31:39.165908004Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Sep 9 00:31:39.167466 containerd[1592]: time="2025-09-09T00:31:39.165934919Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Sep 9 00:31:39.167466 containerd[1592]: time="2025-09-09T00:31:39.165970336Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Sep 9 00:31:39.167466 containerd[1592]: time="2025-09-09T00:31:39.165998043Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." 
type=io.containerd.service.v1 Sep 9 00:31:39.167466 containerd[1592]: time="2025-09-09T00:31:39.166022342Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Sep 9 00:31:39.167466 containerd[1592]: time="2025-09-09T00:31:39.166052904Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Sep 9 00:31:39.167466 containerd[1592]: time="2025-09-09T00:31:39.166084292Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Sep 9 00:31:39.167466 containerd[1592]: time="2025-09-09T00:31:39.166125684Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Sep 9 00:31:39.167466 containerd[1592]: time="2025-09-09T00:31:39.166170642Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Sep 9 00:31:39.167466 containerd[1592]: time="2025-09-09T00:31:39.166199515Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Sep 9 00:31:39.167466 containerd[1592]: time="2025-09-09T00:31:39.166239269Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Sep 9 00:31:39.167466 containerd[1592]: time="2025-09-09T00:31:39.166266720Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Sep 9 00:31:39.168061 containerd[1592]: time="2025-09-09T00:31:39.166284681Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Sep 9 00:31:39.168061 containerd[1592]: time="2025-09-09T00:31:39.166311122Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Sep 9 00:31:39.168061 containerd[1592]: time="2025-09-09T00:31:39.166347290Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Sep 9 00:31:39.168061 containerd[1592]: time="2025-09-09T00:31:39.166373113Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Sep 9 00:31:39.168061 containerd[1592]: time="2025-09-09T00:31:39.166395659Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Sep 9 00:31:39.168061 containerd[1592]: time="2025-09-09T00:31:39.166419555Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Sep 9 00:31:39.168061 containerd[1592]: time="2025-09-09T00:31:39.166463390Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Sep 9 00:31:39.168061 containerd[1592]: time="2025-09-09T00:31:39.166494973Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Sep 9 00:31:39.168061 containerd[1592]: time="2025-09-09T00:31:39.166517643Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Sep 9 00:31:39.168061 containerd[1592]: time="2025-09-09T00:31:39.166542270Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Sep 9 00:31:39.168061 containerd[1592]: time="2025-09-09T00:31:39.166564301Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." 
type=io.containerd.grpc.v1 Sep 9 00:31:39.168061 containerd[1592]: time="2025-09-09T00:31:39.166589103Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Sep 9 00:31:39.168061 containerd[1592]: time="2025-09-09T00:31:39.166622232Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Sep 9 00:31:39.168061 containerd[1592]: time="2025-09-09T00:31:39.166645870Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Sep 9 00:31:39.168061 containerd[1592]: time="2025-09-09T00:31:39.166671878Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Sep 9 00:31:39.169969 containerd[1592]: time="2025-09-09T00:31:39.166769904Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Sep 9 00:31:39.169969 containerd[1592]: time="2025-09-09T00:31:39.166805877Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Sep 9 00:31:39.169969 containerd[1592]: time="2025-09-09T00:31:39.166829021Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Sep 9 00:31:39.169969 containerd[1592]: time="2025-09-09T00:31:39.166854658Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Sep 9 00:31:39.169969 containerd[1592]: time="2025-09-09T00:31:39.166878100Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Sep 9 00:31:39.169969 containerd[1592]: time="2025-09-09T00:31:39.166901934Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Sep 9 00:31:39.169969 containerd[1592]: time="2025-09-09T00:31:39.166927293Z" level=info msg="NRI interface is disabled by configuration." Sep 9 00:31:39.169969 containerd[1592]: time="2025-09-09T00:31:39.166950519Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Sep 9 00:31:39.183627 containerd[1592]: time="2025-09-09T00:31:39.170924386Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:false] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:false SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Sep 9 00:31:39.183627 containerd[1592]: time="2025-09-09T00:31:39.171063196Z" level=info msg="Connect containerd service" Sep 9 00:31:39.183627 containerd[1592]: time="2025-09-09T00:31:39.171153555Z" level=info msg="using legacy CRI server" Sep 9 00:31:39.183627 containerd[1592]: time="2025-09-09T00:31:39.171167012Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Sep 9 00:31:39.283086 containerd[1592]: time="2025-09-09T00:31:39.282986838Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Sep 9 00:31:39.294023 containerd[1592]: time="2025-09-09T00:31:39.286117661Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Sep 9 00:31:39.294023 
containerd[1592]: time="2025-09-09T00:31:39.286540583Z" level=info msg="Start subscribing containerd event" Sep 9 00:31:39.294023 containerd[1592]: time="2025-09-09T00:31:39.286617186Z" level=info msg="Start recovering state" Sep 9 00:31:39.294023 containerd[1592]: time="2025-09-09T00:31:39.287803726Z" level=info msg="Start event monitor" Sep 9 00:31:39.294023 containerd[1592]: time="2025-09-09T00:31:39.287847973Z" level=info msg="Start snapshots syncer" Sep 9 00:31:39.294023 containerd[1592]: time="2025-09-09T00:31:39.287862090Z" level=info msg="Start cni network conf syncer for default" Sep 9 00:31:39.294023 containerd[1592]: time="2025-09-09T00:31:39.287872136Z" level=info msg="Start streaming server" Sep 9 00:31:39.294023 containerd[1592]: time="2025-09-09T00:31:39.288160783Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Sep 9 00:31:39.294023 containerd[1592]: time="2025-09-09T00:31:39.288251081Z" level=info msg=serving... address=/run/containerd/containerd.sock Sep 9 00:31:39.294023 containerd[1592]: time="2025-09-09T00:31:39.292866827Z" level=info msg="containerd successfully booted in 0.481775s" Sep 9 00:31:39.288522 systemd[1]: Started containerd.service - containerd container runtime. Sep 9 00:31:40.380481 tar[1577]: linux-amd64/LICENSE Sep 9 00:31:40.380481 tar[1577]: linux-amd64/README.md Sep 9 00:31:40.435673 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Sep 9 00:31:41.355063 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 9 00:31:41.369525 systemd[1]: Reached target multi-user.target - Multi-User System. Sep 9 00:31:41.452104 systemd[1]: Startup finished in 15.393s (kernel) + 15.491s (userspace) = 30.885s. Sep 9 00:31:41.459379 (kubelet)[1674]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 9 00:31:42.979518 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Sep 9 00:31:43.064998 systemd[1]: Started sshd@0-10.0.0.108:22-10.0.0.1:58014.service - OpenSSH per-connection server daemon (10.0.0.1:58014). Sep 9 00:31:43.295791 sshd[1685]: Accepted publickey for core from 10.0.0.1 port 58014 ssh2: RSA SHA256:KA9/SrLi0HJZnQ3nz9u7pIgg2ymhn74LV9fPMAvvX5M Sep 9 00:31:43.301606 sshd[1685]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 00:31:43.334675 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Sep 9 00:31:43.412011 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Sep 9 00:31:43.418224 systemd-logind[1548]: New session 1 of user core. Sep 9 00:31:43.479796 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Sep 9 00:31:43.510704 systemd[1]: Starting user@500.service - User Manager for UID 500... Sep 9 00:31:43.518232 kubelet[1674]: E0909 00:31:43.512310 1674 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 9 00:31:43.532116 (systemd)[1691]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Sep 9 00:31:43.543728 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 9 00:31:43.544094 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
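The kubelet exit above (and on every scheduled restart below) is the classic symptom of a node that has not yet been joined to a cluster: /var/lib/kubelet/config.yaml is normally written by `kubeadm init` or `kubeadm join`. For illustration only, a minimal document of the kind/apiVersion the error message refers to could be written like this; the path is a demo path and the field values are assumptions, not taken from the log:

```python
from pathlib import Path

# Demo path only; the kubelet actually looks in /var/lib/kubelet/config.yaml.
config_path = Path("/tmp/demo-kubelet/config.yaml")
config_path.parent.mkdir(parents=True, exist_ok=True)

# Bare-minimum KubeletConfiguration; real files from kubeadm carry far more.
config_path.write_text(
    "apiVersion: kubelet.config.k8s.io/v1beta1\n"
    "kind: KubeletConfiguration\n"
    "cgroupDriver: cgroupfs\n"  # matches CgroupDriver seen later in this log
)
print(config_path.read_text())
```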
Sep 9 00:31:43.774987 systemd[1691]: Queued start job for default target default.target. Sep 9 00:31:43.777188 systemd[1691]: Created slice app.slice - User Application Slice. Sep 9 00:31:43.777232 systemd[1691]: Reached target paths.target - Paths. Sep 9 00:31:43.777253 systemd[1691]: Reached target timers.target - Timers. Sep 9 00:31:43.789718 systemd[1691]: Starting dbus.socket - D-Bus User Message Bus Socket... Sep 9 00:31:43.817645 systemd[1691]: Listening on dbus.socket - D-Bus User Message Bus Socket. Sep 9 00:31:43.820734 systemd[1691]: Reached target sockets.target - Sockets. Sep 9 00:31:43.820765 systemd[1691]: Reached target basic.target - Basic System. Sep 9 00:31:43.820869 systemd[1691]: Reached target default.target - Main User Target. Sep 9 00:31:43.820926 systemd[1691]: Startup finished in 265ms. Sep 9 00:31:43.821124 systemd[1]: Started user@500.service - User Manager for UID 500. Sep 9 00:31:43.834153 systemd[1]: Started session-1.scope - Session 1 of User core. Sep 9 00:31:43.944426 systemd[1]: Started sshd@1-10.0.0.108:22-10.0.0.1:58020.service - OpenSSH per-connection server daemon (10.0.0.1:58020). Sep 9 00:31:44.090941 sshd[1705]: Accepted publickey for core from 10.0.0.1 port 58020 ssh2: RSA SHA256:KA9/SrLi0HJZnQ3nz9u7pIgg2ymhn74LV9fPMAvvX5M Sep 9 00:31:44.088135 systemd-logind[1548]: New session 2 of user core. Sep 9 00:31:44.080258 sshd[1705]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 00:31:44.123501 systemd[1]: Started session-2.scope - Session 2 of User core. Sep 9 00:31:44.244807 sshd[1705]: pam_unix(sshd:session): session closed for user core Sep 9 00:31:44.264905 systemd[1]: Started sshd@2-10.0.0.108:22-10.0.0.1:58024.service - OpenSSH per-connection server daemon (10.0.0.1:58024). Sep 9 00:31:44.265624 systemd[1]: sshd@1-10.0.0.108:22-10.0.0.1:58020.service: Deactivated successfully. Sep 9 00:31:44.292967 systemd[1]: session-2.scope: Deactivated successfully. Sep 9 00:31:44.299466 systemd-logind[1548]: Session 2 logged out. Waiting for processes to exit. Sep 9 00:31:44.302759 systemd-logind[1548]: Removed session 2. Sep 9 00:31:44.362814 sshd[1710]: Accepted publickey for core from 10.0.0.1 port 58024 ssh2: RSA SHA256:KA9/SrLi0HJZnQ3nz9u7pIgg2ymhn74LV9fPMAvvX5M Sep 9 00:31:44.365799 sshd[1710]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 00:31:44.382604 systemd-logind[1548]: New session 3 of user core. Sep 9 00:31:44.393026 systemd[1]: Started session-3.scope - Session 3 of User core. Sep 9 00:31:44.473080 sshd[1710]: pam_unix(sshd:session): session closed for user core Sep 9 00:31:44.508445 systemd[1]: Started sshd@3-10.0.0.108:22-10.0.0.1:58028.service - OpenSSH per-connection server daemon (10.0.0.1:58028). Sep 9 00:31:44.510663 systemd[1]: sshd@2-10.0.0.108:22-10.0.0.1:58024.service: Deactivated successfully. Sep 9 00:31:44.517130 systemd[1]: session-3.scope: Deactivated successfully. Sep 9 00:31:44.519234 systemd-logind[1548]: Session 3 logged out. Waiting for processes to exit. Sep 9 00:31:44.522489 systemd-logind[1548]: Removed session 3. Sep 9 00:31:44.597910 sshd[1719]: Accepted publickey for core from 10.0.0.1 port 58028 ssh2: RSA SHA256:KA9/SrLi0HJZnQ3nz9u7pIgg2ymhn74LV9fPMAvvX5M Sep 9 00:31:44.597509 sshd[1719]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 00:31:44.615402 systemd-logind[1548]: New session 4 of user core. Sep 9 00:31:44.636164 systemd[1]: Started session-4.scope - Session 4 of User core. 
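The sshd/systemd-logind session churn above repeats a fixed line shape per login, which makes it easy to audit mechanically. A small sketch that pulls the user, source address, port, and key fingerprint out of an `Accepted publickey` record (the sample line is copied from the log):

```python
import re

LINE = ("sshd[1705]: Accepted publickey for core from 10.0.0.1 port 58020 "
        "ssh2: RSA SHA256:KA9/SrLi0HJZnQ3nz9u7pIgg2ymhn74LV9fPMAvvX5M")

PATTERN = re.compile(
    r"Accepted publickey for (?P<user>\S+) "
    r"from (?P<addr>\S+) port (?P<port>\d+) "
    r"ssh2: (?P<keytype>\S+) (?P<fingerprint>\S+)"
)

match = PATTERN.search(LINE)
if match:
    print(match.groupdict())
    # {'user': 'core', 'addr': '10.0.0.1', 'port': '58020', ...}
```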
Sep 9 00:31:44.814314 sshd[1719]: pam_unix(sshd:session): session closed for user core Sep 9 00:31:44.829865 systemd[1]: Started sshd@4-10.0.0.108:22-10.0.0.1:58030.service - OpenSSH per-connection server daemon (10.0.0.1:58030). Sep 9 00:31:44.830612 systemd[1]: sshd@3-10.0.0.108:22-10.0.0.1:58028.service: Deactivated successfully. Sep 9 00:31:44.839927 systemd[1]: session-4.scope: Deactivated successfully. Sep 9 00:31:44.842688 systemd-logind[1548]: Session 4 logged out. Waiting for processes to exit. Sep 9 00:31:44.846580 systemd-logind[1548]: Removed session 4. Sep 9 00:31:44.946680 sshd[1726]: Accepted publickey for core from 10.0.0.1 port 58030 ssh2: RSA SHA256:KA9/SrLi0HJZnQ3nz9u7pIgg2ymhn74LV9fPMAvvX5M Sep 9 00:31:44.952247 sshd[1726]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 00:31:44.985491 systemd-logind[1548]: New session 5 of user core. Sep 9 00:31:44.993055 systemd[1]: Started session-5.scope - Session 5 of User core. Sep 9 00:31:45.087387 sudo[1733]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Sep 9 00:31:45.087912 sudo[1733]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 9 00:31:45.140458 sudo[1733]: pam_unix(sudo:session): session closed for user root Sep 9 00:31:45.147845 sshd[1726]: pam_unix(sshd:session): session closed for user core Sep 9 00:31:45.175988 systemd[1]: Started sshd@5-10.0.0.108:22-10.0.0.1:58042.service - OpenSSH per-connection server daemon (10.0.0.1:58042). Sep 9 00:31:45.183834 systemd[1]: sshd@4-10.0.0.108:22-10.0.0.1:58030.service: Deactivated successfully. Sep 9 00:31:45.193706 systemd-logind[1548]: Session 5 logged out. Waiting for processes to exit. Sep 9 00:31:45.207808 systemd[1]: session-5.scope: Deactivated successfully. Sep 9 00:31:45.219585 systemd-logind[1548]: Removed session 5. Sep 9 00:31:45.254027 sshd[1735]: Accepted publickey for core from 10.0.0.1 port 58042 ssh2: RSA SHA256:KA9/SrLi0HJZnQ3nz9u7pIgg2ymhn74LV9fPMAvvX5M Sep 9 00:31:45.258542 sshd[1735]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 00:31:45.293425 systemd-logind[1548]: New session 6 of user core. Sep 9 00:31:45.303056 systemd[1]: Started session-6.scope - Session 6 of User core. Sep 9 00:31:45.381277 sudo[1743]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Sep 9 00:31:45.381843 sudo[1743]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 9 00:31:45.404975 sudo[1743]: pam_unix(sudo:session): session closed for user root Sep 9 00:31:45.415361 sudo[1742]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Sep 9 00:31:45.418063 sudo[1742]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 9 00:31:45.480194 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Sep 9 00:31:45.500922 auditctl[1746]: No rules Sep 9 00:31:45.507299 systemd[1]: audit-rules.service: Deactivated successfully. Sep 9 00:31:45.508750 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Sep 9 00:31:45.538543 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Sep 9 00:31:45.680360 augenrules[1765]: No rules Sep 9 00:31:45.684708 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. 
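After the rules files are removed and audit-rules is restarted above, both auditctl and augenrules report an empty rule set. One way to confirm that state from a script (requires root; `auditctl -l` is the standard listing command and itself prints "No rules" when the set is empty):

```python
import subprocess

result = subprocess.run(["auditctl", "-l"], capture_output=True, text=True)
if result.returncode != 0:
    # Typically: not running as root, or audit userspace tools missing.
    print("auditctl failed:", result.stderr.strip())
else:
    print(result.stdout.strip() or "No rules")
```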
Sep 9 00:31:45.688148 sudo[1742]: pam_unix(sudo:session): session closed for user root Sep 9 00:31:45.696630 sshd[1735]: pam_unix(sshd:session): session closed for user core Sep 9 00:31:45.705243 systemd[1]: Started sshd@6-10.0.0.108:22-10.0.0.1:58052.service - OpenSSH per-connection server daemon (10.0.0.1:58052). Sep 9 00:31:45.706056 systemd[1]: sshd@5-10.0.0.108:22-10.0.0.1:58042.service: Deactivated successfully. Sep 9 00:31:45.719558 systemd[1]: session-6.scope: Deactivated successfully. Sep 9 00:31:45.722086 systemd-logind[1548]: Session 6 logged out. Waiting for processes to exit. Sep 9 00:31:45.728315 systemd-logind[1548]: Removed session 6. Sep 9 00:31:45.781674 sshd[1771]: Accepted publickey for core from 10.0.0.1 port 58052 ssh2: RSA SHA256:KA9/SrLi0HJZnQ3nz9u7pIgg2ymhn74LV9fPMAvvX5M Sep 9 00:31:45.785991 sshd[1771]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 00:31:45.807724 systemd-logind[1548]: New session 7 of user core. Sep 9 00:31:45.827124 systemd[1]: Started session-7.scope - Session 7 of User core. Sep 9 00:31:45.910954 sudo[1778]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Sep 9 00:31:45.911797 sudo[1778]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 9 00:31:48.918898 systemd[1]: Starting docker.service - Docker Application Container Engine... Sep 9 00:31:48.948100 (dockerd)[1796]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Sep 9 00:31:51.030156 dockerd[1796]: time="2025-09-09T00:31:51.030052534Z" level=info msg="Starting up" Sep 9 00:31:53.009629 systemd[1]: var-lib-docker-metacopy\x2dcheck3582105999-merged.mount: Deactivated successfully. Sep 9 00:31:53.099078 dockerd[1796]: time="2025-09-09T00:31:53.097981326Z" level=info msg="Loading containers: start." Sep 9 00:31:53.795836 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Sep 9 00:31:53.906876 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 9 00:31:54.384701 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 9 00:31:54.388551 (kubelet)[1864]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 9 00:31:54.739578 kubelet[1864]: E0909 00:31:54.735548 1864 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 9 00:31:54.755088 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 9 00:31:54.758449 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 9 00:31:54.810881 kernel: Initializing XFRM netlink socket Sep 9 00:31:55.156862 systemd-networkd[1247]: docker0: Link UP Sep 9 00:31:55.219248 dockerd[1796]: time="2025-09-09T00:31:55.216839508Z" level=info msg="Loading containers: done." 
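Once the daemon reports "API listen on /run/docker.sock" just below, the Engine API answers on that unix socket. A minimal liveness probe, assuming only the default socket path and the documented `/_ping` endpoint, speaking raw HTTP over AF_UNIX so no third-party client is needed:

```python
import socket

SOCKET_PATH = "/var/run/docker.sock"  # default Docker control socket

with socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) as sock:
    sock.connect(SOCKET_PATH)
    # /_ping is the cheapest Engine API endpoint; the body is just "OK".
    sock.sendall(b"GET /_ping HTTP/1.0\r\nHost: docker\r\n\r\n")
    response = sock.recv(4096).decode()  # ping response fits in one read

print(response.splitlines()[0])   # e.g. "HTTP/1.1 200 OK"
```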
Sep 9 00:31:55.304272 dockerd[1796]: time="2025-09-09T00:31:55.303913971Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Sep 9 00:31:55.305804 dockerd[1796]: time="2025-09-09T00:31:55.305119518Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 Sep 9 00:31:55.305804 dockerd[1796]: time="2025-09-09T00:31:55.305367181Z" level=info msg="Daemon has completed initialization" Sep 9 00:31:55.464092 dockerd[1796]: time="2025-09-09T00:31:55.463864759Z" level=info msg="API listen on /run/docker.sock" Sep 9 00:31:55.466584 systemd[1]: Started docker.service - Docker Application Container Engine. Sep 9 00:31:57.636681 containerd[1592]: time="2025-09-09T00:31:57.635819930Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.12\"" Sep 9 00:31:59.388523 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2619812653.mount: Deactivated successfully. Sep 9 00:32:03.104141 containerd[1592]: time="2025-09-09T00:32:03.104043520Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.31.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 00:32:03.106514 containerd[1592]: time="2025-09-09T00:32:03.106422205Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.31.12: active requests=0, bytes read=28079631" Sep 9 00:32:03.108572 containerd[1592]: time="2025-09-09T00:32:03.108521954Z" level=info msg="ImageCreate event name:\"sha256:b1963c5b49c1722b8f408deaf83aafca7f48f47fed0ed14e5c10e93cc55974a7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 00:32:03.112501 containerd[1592]: time="2025-09-09T00:32:03.112344350Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:e9011c3bee8c06ecabd7816e119dca4e448c92f7a78acd891de3d2db1dc6c234\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 00:32:03.114668 containerd[1592]: time="2025-09-09T00:32:03.114567083Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.31.12\" with image id \"sha256:b1963c5b49c1722b8f408deaf83aafca7f48f47fed0ed14e5c10e93cc55974a7\", repo tag \"registry.k8s.io/kube-apiserver:v1.31.12\", repo digest \"registry.k8s.io/kube-apiserver@sha256:e9011c3bee8c06ecabd7816e119dca4e448c92f7a78acd891de3d2db1dc6c234\", size \"28076431\" in 5.478250099s" Sep 9 00:32:03.114668 containerd[1592]: time="2025-09-09T00:32:03.114658138Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.12\" returns image reference \"sha256:b1963c5b49c1722b8f408deaf83aafca7f48f47fed0ed14e5c10e93cc55974a7\"" Sep 9 00:32:03.116000 containerd[1592]: time="2025-09-09T00:32:03.115924172Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.12\"" Sep 9 00:32:04.938291 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Sep 9 00:32:05.025768 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 9 00:32:05.243666 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
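The completed pull above gives enough to estimate registry throughput: 28079631 bytes read in 5.478250099s. The division, using only figures from the log:

```python
BYTES_READ = 28_079_631        # "bytes read=28079631" above
DURATION_S = 5.478250099       # "in 5.478250099s" above

rate = BYTES_READ / DURATION_S
print(f"{rate / 1e6:.2f} MB/s ({rate / 2**20:.2f} MiB/s)")
# roughly 5.1 MB/s for the kube-apiserver image
```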
Sep 9 00:32:05.284304 (kubelet)[2035]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 9 00:32:06.195613 kubelet[2035]: E0909 00:32:06.194492 2035 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 9 00:32:06.201039 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 9 00:32:06.201462 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 9 00:32:06.405182 containerd[1592]: time="2025-09-09T00:32:06.405086829Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.31.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 00:32:06.450800 containerd[1592]: time="2025-09-09T00:32:06.450581897Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.31.12: active requests=0, bytes read=24714681" Sep 9 00:32:06.535332 containerd[1592]: time="2025-09-09T00:32:06.535238792Z" level=info msg="ImageCreate event name:\"sha256:200c1a99a6f2b9d3b0a6e9b7362663513589341e0e58bc3b953a373efa735dfd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 00:32:06.690462 containerd[1592]: time="2025-09-09T00:32:06.690356943Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:d2862f94d87320267fddbd55db26556a267aa802e51d6b60f25786b4c428afc8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 00:32:06.691742 containerd[1592]: time="2025-09-09T00:32:06.691694666Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.31.12\" with image id \"sha256:200c1a99a6f2b9d3b0a6e9b7362663513589341e0e58bc3b953a373efa735dfd\", repo tag \"registry.k8s.io/kube-controller-manager:v1.31.12\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:d2862f94d87320267fddbd55db26556a267aa802e51d6b60f25786b4c428afc8\", size \"26317875\" in 3.575687668s" Sep 9 00:32:06.691801 containerd[1592]: time="2025-09-09T00:32:06.691749932Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.12\" returns image reference \"sha256:200c1a99a6f2b9d3b0a6e9b7362663513589341e0e58bc3b953a373efa735dfd\"" Sep 9 00:32:06.692273 containerd[1592]: time="2025-09-09T00:32:06.692251600Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.12\"" Sep 9 00:32:08.420972 containerd[1592]: time="2025-09-09T00:32:08.420883009Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.31.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 00:32:08.421699 containerd[1592]: time="2025-09-09T00:32:08.421641700Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.31.12: active requests=0, bytes read=18782427" Sep 9 00:32:08.424402 containerd[1592]: time="2025-09-09T00:32:08.424371551Z" level=info msg="ImageCreate event name:\"sha256:bcdd9599681a9460a5539177a986dbdaf880ac56eeb117ab94adb8f37889efba\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 00:32:08.428353 containerd[1592]: time="2025-09-09T00:32:08.428308282Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:152943b7e30244f4415fd0a5860a2dccd91660fe983d30a28a10edb0cc8f6756\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" 
Sep 9 00:32:08.429694 containerd[1592]: time="2025-09-09T00:32:08.429655929Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.31.12\" with image id \"sha256:bcdd9599681a9460a5539177a986dbdaf880ac56eeb117ab94adb8f37889efba\", repo tag \"registry.k8s.io/kube-scheduler:v1.31.12\", repo digest \"registry.k8s.io/kube-scheduler@sha256:152943b7e30244f4415fd0a5860a2dccd91660fe983d30a28a10edb0cc8f6756\", size \"20385639\" in 1.73735038s" Sep 9 00:32:08.429770 containerd[1592]: time="2025-09-09T00:32:08.429700003Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.12\" returns image reference \"sha256:bcdd9599681a9460a5539177a986dbdaf880ac56eeb117ab94adb8f37889efba\"" Sep 9 00:32:08.430271 containerd[1592]: time="2025-09-09T00:32:08.430246889Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.12\"" Sep 9 00:32:10.088467 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount115958882.mount: Deactivated successfully. Sep 9 00:32:12.526233 containerd[1592]: time="2025-09-09T00:32:12.526145593Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.31.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 00:32:12.584029 containerd[1592]: time="2025-09-09T00:32:12.583907555Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.31.12: active requests=0, bytes read=30384255" Sep 9 00:32:12.654265 containerd[1592]: time="2025-09-09T00:32:12.654168249Z" level=info msg="ImageCreate event name:\"sha256:507cc52f5f78c0cff25e904c76c18e6bfc90982e9cc2aa4dcb19033f21c3f679\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 00:32:12.728879 containerd[1592]: time="2025-09-09T00:32:12.728761948Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:90aa6b5f4065937521ff8438bc705317485d0be3f8b00a07145e697d92cc2cc6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 00:32:12.729612 containerd[1592]: time="2025-09-09T00:32:12.729534193Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.31.12\" with image id \"sha256:507cc52f5f78c0cff25e904c76c18e6bfc90982e9cc2aa4dcb19033f21c3f679\", repo tag \"registry.k8s.io/kube-proxy:v1.31.12\", repo digest \"registry.k8s.io/kube-proxy@sha256:90aa6b5f4065937521ff8438bc705317485d0be3f8b00a07145e697d92cc2cc6\", size \"30383274\" in 4.299251315s" Sep 9 00:32:12.729612 containerd[1592]: time="2025-09-09T00:32:12.729604005Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.12\" returns image reference \"sha256:507cc52f5f78c0cff25e904c76c18e6bfc90982e9cc2aa4dcb19033f21c3f679\"" Sep 9 00:32:12.730369 containerd[1592]: time="2025-09-09T00:32:12.730336786Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" Sep 9 00:32:14.201538 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2296494547.mount: Deactivated successfully. 
Sep 9 00:32:16.414601 containerd[1592]: time="2025-09-09T00:32:16.414540259Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 00:32:16.415407 containerd[1592]: time="2025-09-09T00:32:16.415319328Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=18565241" Sep 9 00:32:16.416361 containerd[1592]: time="2025-09-09T00:32:16.416331493Z" level=info msg="ImageCreate event name:\"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 00:32:16.419498 containerd[1592]: time="2025-09-09T00:32:16.419460007Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 00:32:16.420601 containerd[1592]: time="2025-09-09T00:32:16.420555539Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"18562039\" in 3.690184824s" Sep 9 00:32:16.420601 containerd[1592]: time="2025-09-09T00:32:16.420587467Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\"" Sep 9 00:32:16.421227 containerd[1592]: time="2025-09-09T00:32:16.421193618Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Sep 9 00:32:16.437987 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Sep 9 00:32:16.452634 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 9 00:32:16.631234 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 9 00:32:16.636083 (kubelet)[2121]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 9 00:32:16.675035 kubelet[2121]: E0909 00:32:16.674857 2121 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 9 00:32:16.679232 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 9 00:32:16.679647 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 9 00:32:18.451222 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2168909299.mount: Deactivated successfully. 
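The "restart counter is at 3" messages in this stretch come from systemd's per-unit restart accounting. That counter can be read back directly; `NRestarts` is a standard systemd unit property:

```python
import subprocess

# `--value` strips the "NRestarts=" prefix and prints just the number.
result = subprocess.run(
    ["systemctl", "show", "-p", "NRestarts", "--value", "kubelet.service"],
    capture_output=True, text=True, check=True,
)
print("kubelet restarts so far:", result.stdout.strip())
```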
Sep 9 00:32:18.459015 containerd[1592]: time="2025-09-09T00:32:18.458964758Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 00:32:18.459715 containerd[1592]: time="2025-09-09T00:32:18.459657060Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138" Sep 9 00:32:18.461048 containerd[1592]: time="2025-09-09T00:32:18.460994250Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 00:32:18.463336 containerd[1592]: time="2025-09-09T00:32:18.463278978Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 00:32:18.463909 containerd[1592]: time="2025-09-09T00:32:18.463876922Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 2.042646554s" Sep 9 00:32:18.463909 containerd[1592]: time="2025-09-09T00:32:18.463909420Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Sep 9 00:32:18.464508 containerd[1592]: time="2025-09-09T00:32:18.464476290Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\"" Sep 9 00:32:19.587871 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3197318230.mount: Deactivated successfully. Sep 9 00:32:21.567718 update_engine[1557]: I20250909 00:32:21.567582 1557 update_attempter.cc:509] Updating boot flags... 
Sep 9 00:32:22.279437 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 36 scanned by (udev-worker) (2192) Sep 9 00:32:22.356443 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 36 scanned by (udev-worker) (2191) Sep 9 00:32:22.420445 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 36 scanned by (udev-worker) (2191) Sep 9 00:32:23.447316 containerd[1592]: time="2025-09-09T00:32:23.447214088Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.15-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 00:32:23.536898 containerd[1592]: time="2025-09-09T00:32:23.536796376Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.15-0: active requests=0, bytes read=56910709" Sep 9 00:32:23.610641 containerd[1592]: time="2025-09-09T00:32:23.610567435Z" level=info msg="ImageCreate event name:\"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 00:32:23.656480 containerd[1592]: time="2025-09-09T00:32:23.656384769Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 00:32:23.657736 containerd[1592]: time="2025-09-09T00:32:23.657669779Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.15-0\" with image id \"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\", repo tag \"registry.k8s.io/etcd:3.5.15-0\", repo digest \"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\", size \"56909194\" in 5.193162314s" Sep 9 00:32:23.657736 containerd[1592]: time="2025-09-09T00:32:23.657729285Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\" returns image reference \"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\"" Sep 9 00:32:26.014948 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Sep 9 00:32:26.026614 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 9 00:32:26.051851 systemd[1]: Reloading requested from client PID 2234 ('systemctl') (unit session-7.scope)... Sep 9 00:32:26.051869 systemd[1]: Reloading... Sep 9 00:32:26.139465 zram_generator::config[2276]: No configuration found. Sep 9 00:32:26.699497 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 9 00:32:26.779309 systemd[1]: Reloading finished in 726 ms. Sep 9 00:32:26.831012 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Sep 9 00:32:26.831144 systemd[1]: kubelet.service: Failed with result 'signal'. Sep 9 00:32:26.831569 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Sep 9 00:32:26.833464 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 9 00:32:26.999739 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 9 00:32:27.004319 (kubelet)[2333]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Sep 9 00:32:27.042061 kubelet[2333]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. 
See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 9 00:32:27.042061 kubelet[2333]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Sep 9 00:32:27.042061 kubelet[2333]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 9 00:32:27.042525 kubelet[2333]: I0909 00:32:27.042122 2333 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Sep 9 00:32:27.277176 kubelet[2333]: I0909 00:32:27.277037 2333 server.go:491] "Kubelet version" kubeletVersion="v1.31.8" Sep 9 00:32:27.277176 kubelet[2333]: I0909 00:32:27.277078 2333 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Sep 9 00:32:27.277364 kubelet[2333]: I0909 00:32:27.277339 2333 server.go:934] "Client rotation is on, will bootstrap in background" Sep 9 00:32:27.297236 kubelet[2333]: E0909 00:32:27.297184 2333 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.108:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.108:6443: connect: connection refused" logger="UnhandledError" Sep 9 00:32:27.298059 kubelet[2333]: I0909 00:32:27.298038 2333 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Sep 9 00:32:27.305879 kubelet[2333]: E0909 00:32:27.305836 2333 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Sep 9 00:32:27.305879 kubelet[2333]: I0909 00:32:27.305870 2333 server.go:1408] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Sep 9 00:32:27.311937 kubelet[2333]: I0909 00:32:27.311902 2333 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Sep 9 00:32:27.312288 kubelet[2333]: I0909 00:32:27.312255 2333 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Sep 9 00:32:27.312475 kubelet[2333]: I0909 00:32:27.312429 2333 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Sep 9 00:32:27.312663 kubelet[2333]: I0909 00:32:27.312461 2333 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":1} Sep 9 00:32:27.312797 kubelet[2333]: I0909 00:32:27.312672 2333 topology_manager.go:138] "Creating topology manager with none policy" Sep 9 00:32:27.312797 kubelet[2333]: I0909 00:32:27.312686 2333 container_manager_linux.go:300] "Creating device plugin manager" Sep 9 00:32:27.312877 kubelet[2333]: I0909 00:32:27.312853 2333 state_mem.go:36] "Initialized new in-memory state store" Sep 9 00:32:27.314890 kubelet[2333]: I0909 00:32:27.314861 2333 kubelet.go:408] "Attempting to sync node with API server" Sep 9 00:32:27.314890 kubelet[2333]: I0909 00:32:27.314883 2333 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Sep 9 00:32:27.314979 kubelet[2333]: I0909 00:32:27.314927 2333 kubelet.go:314] "Adding apiserver pod source" Sep 9 00:32:27.314979 kubelet[2333]: I0909 00:32:27.314956 2333 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Sep 9 00:32:27.319460 kubelet[2333]: I0909 00:32:27.318283 2333 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Sep 9 00:32:27.319460 kubelet[2333]: I0909 00:32:27.318706 2333 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Sep 9 00:32:27.319460 kubelet[2333]: W0909 00:32:27.318762 2333 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
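Among the HardEvictionThresholds in the container-manager dump above is memory.available < 100Mi. A sketch of that single comparison done by hand against /proc/meminfo; the kubelet's real signal computation is more involved, so this is only the threshold check:

```python
THRESHOLD = 100 * 1024 * 1024  # 100Mi, the hard-eviction value in the dump above

available_kib = 0
with open("/proc/meminfo") as f:
    for line in f:
        if line.startswith("MemAvailable:"):
            # /proc/meminfo reports the value in kB (really KiB)
            available_kib = int(line.split()[1])
            break

available = available_kib * 1024
print(f"memory.available ~ {available / 2**20:.0f} MiB")
print("below eviction threshold:", available < THRESHOLD)
```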
Sep 9 00:32:27.319920 kubelet[2333]: W0909 00:32:27.319870 2333 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.108:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.108:6443: connect: connection refused Sep 9 00:32:27.319952 kubelet[2333]: E0909 00:32:27.319923 2333 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.108:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.108:6443: connect: connection refused" logger="UnhandledError" Sep 9 00:32:27.321288 kubelet[2333]: I0909 00:32:27.321272 2333 server.go:1274] "Started kubelet" Sep 9 00:32:27.323201 kubelet[2333]: I0909 00:32:27.323041 2333 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Sep 9 00:32:27.323245 kubelet[2333]: I0909 00:32:27.323204 2333 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Sep 9 00:32:27.324562 kubelet[2333]: W0909 00:32:27.324512 2333 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.108:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.108:6443: connect: connection refused Sep 9 00:32:27.324628 kubelet[2333]: E0909 00:32:27.324576 2333 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.108:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.108:6443: connect: connection refused" logger="UnhandledError" Sep 9 00:32:27.324628 kubelet[2333]: I0909 00:32:27.324547 2333 server.go:449] "Adding debug handlers to kubelet server" Sep 9 00:32:27.324628 kubelet[2333]: I0909 00:32:27.324588 2333 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Sep 9 00:32:27.325888 kubelet[2333]: I0909 00:32:27.325845 2333 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Sep 9 00:32:27.326861 kubelet[2333]: I0909 00:32:27.326840 2333 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Sep 9 00:32:27.329137 kubelet[2333]: I0909 00:32:27.329106 2333 volume_manager.go:289] "Starting Kubelet Volume Manager" Sep 9 00:32:27.330502 kubelet[2333]: E0909 00:32:27.329499 2333 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 9 00:32:27.330502 kubelet[2333]: I0909 00:32:27.329881 2333 desired_state_of_world_populator.go:147] "Desired state populator starts to run" Sep 9 00:32:27.330502 kubelet[2333]: I0909 00:32:27.329978 2333 reconciler.go:26] "Reconciler: start to sync state" Sep 9 00:32:27.330502 kubelet[2333]: E0909 00:32:27.330139 2333 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.108:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.108:6443: connect: connection refused" interval="200ms" Sep 9 00:32:27.331362 kubelet[2333]: E0909 00:32:27.328543 2333 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.108:6443/api/v1/namespaces/default/events\": dial tcp 
10.0.0.108:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.186375ee9e300f76 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-09-09 00:32:27.321241462 +0000 UTC m=+0.312965065,LastTimestamp:2025-09-09 00:32:27.321241462 +0000 UTC m=+0.312965065,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Sep 9 00:32:27.331509 kubelet[2333]: I0909 00:32:27.331426 2333 factory.go:221] Registration of the systemd container factory successfully Sep 9 00:32:27.331558 kubelet[2333]: I0909 00:32:27.331537 2333 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Sep 9 00:32:27.331871 kubelet[2333]: W0909 00:32:27.331830 2333 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.108:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.108:6443: connect: connection refused Sep 9 00:32:27.331940 kubelet[2333]: E0909 00:32:27.331893 2333 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.108:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.108:6443: connect: connection refused" logger="UnhandledError" Sep 9 00:32:27.332608 kubelet[2333]: E0909 00:32:27.332559 2333 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Sep 9 00:32:27.332906 kubelet[2333]: I0909 00:32:27.332890 2333 factory.go:221] Registration of the containerd container factory successfully Sep 9 00:32:27.346773 kubelet[2333]: I0909 00:32:27.346731 2333 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Sep 9 00:32:27.348010 kubelet[2333]: I0909 00:32:27.347986 2333 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Sep 9 00:32:27.348066 kubelet[2333]: I0909 00:32:27.348013 2333 status_manager.go:217] "Starting to sync pod status with apiserver" Sep 9 00:32:27.348066 kubelet[2333]: I0909 00:32:27.348037 2333 kubelet.go:2321] "Starting kubelet main sync loop" Sep 9 00:32:27.348127 kubelet[2333]: E0909 00:32:27.348091 2333 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Sep 9 00:32:27.349381 kubelet[2333]: W0909 00:32:27.349236 2333 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.108:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.108:6443: connect: connection refused Sep 9 00:32:27.349381 kubelet[2333]: E0909 00:32:27.349297 2333 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.108:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.108:6443: connect: connection refused" logger="UnhandledError" Sep 9 00:32:27.363043 kubelet[2333]: I0909 00:32:27.363019 2333 cpu_manager.go:214] "Starting CPU manager" policy="none" Sep 9 00:32:27.363043 kubelet[2333]: I0909 00:32:27.363040 2333 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Sep 9 00:32:27.363135 kubelet[2333]: I0909 00:32:27.363063 2333 state_mem.go:36] "Initialized new in-memory state store" Sep 9 00:32:27.429836 kubelet[2333]: E0909 00:32:27.429790 2333 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 9 00:32:27.449250 kubelet[2333]: E0909 00:32:27.449200 2333 kubelet.go:2345] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Sep 9 00:32:27.530529 kubelet[2333]: E0909 00:32:27.530403 2333 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 9 00:32:27.531193 kubelet[2333]: E0909 00:32:27.531159 2333 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.108:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.108:6443: connect: connection refused" interval="400ms" Sep 9 00:32:27.631787 kubelet[2333]: E0909 00:32:27.631737 2333 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 9 00:32:27.649956 kubelet[2333]: E0909 00:32:27.649914 2333 kubelet.go:2345] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Sep 9 00:32:27.732436 kubelet[2333]: E0909 00:32:27.732369 2333 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 9 00:32:27.832810 kubelet[2333]: E0909 00:32:27.832659 2333 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 9 00:32:27.932719 kubelet[2333]: E0909 00:32:27.932654 2333 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.108:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.108:6443: connect: connection refused" interval="800ms" Sep 9 00:32:27.932719 kubelet[2333]: E0909 00:32:27.932716 2333 kubelet_node_status.go:453] "Error getting the current node from lister" err="node 
\"localhost\" not found" Sep 9 00:32:28.033444 kubelet[2333]: E0909 00:32:28.033371 2333 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 9 00:32:28.050615 kubelet[2333]: E0909 00:32:28.050552 2333 kubelet.go:2345] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Sep 9 00:32:28.134258 kubelet[2333]: E0909 00:32:28.134097 2333 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 9 00:32:28.234877 kubelet[2333]: E0909 00:32:28.234799 2333 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 9 00:32:28.278441 kubelet[2333]: W0909 00:32:28.278355 2333 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.108:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.108:6443: connect: connection refused Sep 9 00:32:28.278544 kubelet[2333]: E0909 00:32:28.278441 2333 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.108:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.108:6443: connect: connection refused" logger="UnhandledError" Sep 9 00:32:28.334932 kubelet[2333]: E0909 00:32:28.334884 2333 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 9 00:32:28.435715 kubelet[2333]: E0909 00:32:28.435560 2333 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 9 00:32:28.506579 kubelet[2333]: W0909 00:32:28.506484 2333 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.108:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.108:6443: connect: connection refused Sep 9 00:32:28.506579 kubelet[2333]: E0909 00:32:28.506580 2333 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.108:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.108:6443: connect: connection refused" logger="UnhandledError" Sep 9 00:32:28.536346 kubelet[2333]: E0909 00:32:28.536269 2333 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 9 00:32:28.636994 kubelet[2333]: E0909 00:32:28.636904 2333 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 9 00:32:28.681756 kubelet[2333]: W0909 00:32:28.681688 2333 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.108:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.108:6443: connect: connection refused Sep 9 00:32:28.681756 kubelet[2333]: E0909 00:32:28.681753 2333 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.108:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.108:6443: connect: connection refused" logger="UnhandledError" Sep 9 00:32:28.733849 kubelet[2333]: E0909 00:32:28.733794 
2333 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.108:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.108:6443: connect: connection refused" interval="1.6s" Sep 9 00:32:28.737962 kubelet[2333]: E0909 00:32:28.737917 2333 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 9 00:32:28.838309 kubelet[2333]: E0909 00:32:28.838229 2333 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 9 00:32:28.851513 kubelet[2333]: E0909 00:32:28.851451 2333 kubelet.go:2345] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Sep 9 00:32:28.856188 kubelet[2333]: W0909 00:32:28.856115 2333 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.108:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.108:6443: connect: connection refused Sep 9 00:32:28.856188 kubelet[2333]: E0909 00:32:28.856184 2333 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.108:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.108:6443: connect: connection refused" logger="UnhandledError" Sep 9 00:32:28.867851 kubelet[2333]: I0909 00:32:28.867804 2333 policy_none.go:49] "None policy: Start" Sep 9 00:32:28.868708 kubelet[2333]: I0909 00:32:28.868670 2333 memory_manager.go:170] "Starting memorymanager" policy="None" Sep 9 00:32:28.868708 kubelet[2333]: I0909 00:32:28.868704 2333 state_mem.go:35] "Initializing new in-memory state store" Sep 9 00:32:28.876003 kubelet[2333]: I0909 00:32:28.875969 2333 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Sep 9 00:32:28.876259 kubelet[2333]: I0909 00:32:28.876234 2333 eviction_manager.go:189] "Eviction manager: starting control loop" Sep 9 00:32:28.876302 kubelet[2333]: I0909 00:32:28.876255 2333 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Sep 9 00:32:28.877056 kubelet[2333]: I0909 00:32:28.877040 2333 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Sep 9 00:32:28.878139 kubelet[2333]: E0909 00:32:28.878120 2333 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Sep 9 00:32:28.978530 kubelet[2333]: I0909 00:32:28.978471 2333 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Sep 9 00:32:28.979040 kubelet[2333]: E0909 00:32:28.978983 2333 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.108:6443/api/v1/nodes\": dial tcp 10.0.0.108:6443: connect: connection refused" node="localhost" Sep 9 00:32:29.180923 kubelet[2333]: I0909 00:32:29.180796 2333 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Sep 9 00:32:29.181379 kubelet[2333]: E0909 00:32:29.181222 2333 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.108:6443/api/v1/nodes\": dial tcp 10.0.0.108:6443: connect: connection refused" node="localhost" Sep 9 00:32:29.413983 kubelet[2333]: E0909 00:32:29.413902 2333 certificate_manager.go:562] "Unhandled 
Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.108:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.108:6443: connect: connection refused" logger="UnhandledError" Sep 9 00:32:29.582803 kubelet[2333]: I0909 00:32:29.582748 2333 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Sep 9 00:32:29.583173 kubelet[2333]: E0909 00:32:29.583127 2333 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.108:6443/api/v1/nodes\": dial tcp 10.0.0.108:6443: connect: connection refused" node="localhost" Sep 9 00:32:30.334605 kubelet[2333]: E0909 00:32:30.334533 2333 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.108:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.108:6443: connect: connection refused" interval="3.2s" Sep 9 00:32:30.385507 kubelet[2333]: I0909 00:32:30.385458 2333 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Sep 9 00:32:30.385923 kubelet[2333]: E0909 00:32:30.385874 2333 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.108:6443/api/v1/nodes\": dial tcp 10.0.0.108:6443: connect: connection refused" node="localhost" Sep 9 00:32:30.548658 kubelet[2333]: I0909 00:32:30.548582 2333 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/fec3f691a145cb26ff55e4af388500b7-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"fec3f691a145cb26ff55e4af388500b7\") " pod="kube-system/kube-controller-manager-localhost" Sep 9 00:32:30.548658 kubelet[2333]: I0909 00:32:30.548652 2333 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/fec3f691a145cb26ff55e4af388500b7-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"fec3f691a145cb26ff55e4af388500b7\") " pod="kube-system/kube-controller-manager-localhost" Sep 9 00:32:30.548658 kubelet[2333]: I0909 00:32:30.548680 2333 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/fec3f691a145cb26ff55e4af388500b7-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"fec3f691a145cb26ff55e4af388500b7\") " pod="kube-system/kube-controller-manager-localhost" Sep 9 00:32:30.548937 kubelet[2333]: I0909 00:32:30.548706 2333 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/aa2fe977c3b466cb2ab2ff95b2c1a9ed-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"aa2fe977c3b466cb2ab2ff95b2c1a9ed\") " pod="kube-system/kube-apiserver-localhost" Sep 9 00:32:30.548937 kubelet[2333]: I0909 00:32:30.548728 2333 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/aa2fe977c3b466cb2ab2ff95b2c1a9ed-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"aa2fe977c3b466cb2ab2ff95b2c1a9ed\") " pod="kube-system/kube-apiserver-localhost" Sep 9 00:32:30.548937 kubelet[2333]: I0909 00:32:30.548752 2333 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/fec3f691a145cb26ff55e4af388500b7-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"fec3f691a145cb26ff55e4af388500b7\") " pod="kube-system/kube-controller-manager-localhost" Sep 9 00:32:30.548937 kubelet[2333]: I0909 00:32:30.548771 2333 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/fec3f691a145cb26ff55e4af388500b7-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"fec3f691a145cb26ff55e4af388500b7\") " pod="kube-system/kube-controller-manager-localhost" Sep 9 00:32:30.548937 kubelet[2333]: I0909 00:32:30.548791 2333 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/5dc878868de11c6196259ae42039f4ff-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"5dc878868de11c6196259ae42039f4ff\") " pod="kube-system/kube-scheduler-localhost" Sep 9 00:32:30.549068 kubelet[2333]: I0909 00:32:30.548810 2333 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/aa2fe977c3b466cb2ab2ff95b2c1a9ed-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"aa2fe977c3b466cb2ab2ff95b2c1a9ed\") " pod="kube-system/kube-apiserver-localhost" Sep 9 00:32:30.549068 kubelet[2333]: W0909 00:32:30.548874 2333 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.108:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.108:6443: connect: connection refused Sep 9 00:32:30.549068 kubelet[2333]: E0909 00:32:30.548925 2333 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.108:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.108:6443: connect: connection refused" logger="UnhandledError" Sep 9 00:32:30.756551 kubelet[2333]: W0909 00:32:30.756484 2333 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.108:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.108:6443: connect: connection refused Sep 9 00:32:30.756551 kubelet[2333]: E0909 00:32:30.756550 2333 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.108:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.108:6443: connect: connection refused" logger="UnhandledError" Sep 9 00:32:30.758965 kubelet[2333]: E0909 00:32:30.758928 2333 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:32:30.759818 containerd[1592]: time="2025-09-09T00:32:30.759757235Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:fec3f691a145cb26ff55e4af388500b7,Namespace:kube-system,Attempt:0,}" Sep 9 00:32:30.761077 kubelet[2333]: E0909 00:32:30.761029 2333 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the 
applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:32:30.761329 kubelet[2333]: E0909 00:32:30.761037 2333 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:32:30.761714 containerd[1592]: time="2025-09-09T00:32:30.761660495Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:5dc878868de11c6196259ae42039f4ff,Namespace:kube-system,Attempt:0,}" Sep 9 00:32:30.762191 containerd[1592]: time="2025-09-09T00:32:30.762159113Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:aa2fe977c3b466cb2ab2ff95b2c1a9ed,Namespace:kube-system,Attempt:0,}" Sep 9 00:32:30.824695 kubelet[2333]: W0909 00:32:30.824648 2333 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.108:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.108:6443: connect: connection refused Sep 9 00:32:30.824799 kubelet[2333]: E0909 00:32:30.824704 2333 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.108:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.108:6443: connect: connection refused" logger="UnhandledError" Sep 9 00:32:31.568545 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3850651099.mount: Deactivated successfully. Sep 9 00:32:31.755443 kubelet[2333]: W0909 00:32:31.755319 2333 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.108:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.108:6443: connect: connection refused Sep 9 00:32:31.755969 kubelet[2333]: E0909 00:32:31.755482 2333 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.108:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.108:6443: connect: connection refused" logger="UnhandledError" Sep 9 00:32:31.811119 containerd[1592]: time="2025-09-09T00:32:31.811037260Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 9 00:32:31.812379 containerd[1592]: time="2025-09-09T00:32:31.812321915Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 9 00:32:31.813481 containerd[1592]: time="2025-09-09T00:32:31.813449049Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 9 00:32:31.814442 containerd[1592]: time="2025-09-09T00:32:31.814374806Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Sep 9 00:32:31.815611 containerd[1592]: time="2025-09-09T00:32:31.815574616Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Sep 9 00:32:31.816662 containerd[1592]: time="2025-09-09T00:32:31.816634614Z" level=info msg="stop pulling image 
registry.k8s.io/pause:3.8: active requests=0, bytes read=312056" Sep 9 00:32:31.817797 containerd[1592]: time="2025-09-09T00:32:31.817764113Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 9 00:32:31.823078 containerd[1592]: time="2025-09-09T00:32:31.822966972Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 9 00:32:31.824147 containerd[1592]: time="2025-09-09T00:32:31.824101782Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 1.062345317s" Sep 9 00:32:31.825030 containerd[1592]: time="2025-09-09T00:32:31.824989787Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 1.065119996s" Sep 9 00:32:31.827450 containerd[1592]: time="2025-09-09T00:32:31.827381983Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 1.065171709s" Sep 9 00:32:31.980449 containerd[1592]: time="2025-09-09T00:32:31.977970387Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 9 00:32:31.980449 containerd[1592]: time="2025-09-09T00:32:31.978025096Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 9 00:32:31.980449 containerd[1592]: time="2025-09-09T00:32:31.978036240Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 9 00:32:31.980449 containerd[1592]: time="2025-09-09T00:32:31.978124471Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 9 00:32:31.980449 containerd[1592]: time="2025-09-09T00:32:31.978191607Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 9 00:32:31.980449 containerd[1592]: time="2025-09-09T00:32:31.978251406Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 9 00:32:31.980449 containerd[1592]: time="2025-09-09T00:32:31.978263783Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 9 00:32:31.980449 containerd[1592]: time="2025-09-09T00:32:31.978371607Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 9 00:32:31.980449 containerd[1592]: time="2025-09-09T00:32:31.979579706Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 9 00:32:31.980449 containerd[1592]: time="2025-09-09T00:32:31.979623461Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 9 00:32:31.980449 containerd[1592]: time="2025-09-09T00:32:31.979634224Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 9 00:32:31.980449 containerd[1592]: time="2025-09-09T00:32:31.979796185Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 9 00:32:31.987399 kubelet[2333]: I0909 00:32:31.987359 2333 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Sep 9 00:32:31.987924 kubelet[2333]: E0909 00:32:31.987882 2333 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.108:6443/api/v1/nodes\": dial tcp 10.0.0.108:6443: connect: connection refused" node="localhost" Sep 9 00:32:32.047095 containerd[1592]: time="2025-09-09T00:32:32.046752664Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:5dc878868de11c6196259ae42039f4ff,Namespace:kube-system,Attempt:0,} returns sandbox id \"a23a989282e00025b6e3eb36023e8422a854617b64f8a8e9bc3e7047f42f7a41\"" Sep 9 00:32:32.048597 containerd[1592]: time="2025-09-09T00:32:32.048562116Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:aa2fe977c3b466cb2ab2ff95b2c1a9ed,Namespace:kube-system,Attempt:0,} returns sandbox id \"dc2697af4a91bca0b5a18147b24474e3b481beb8fe78a14f5639ee7a918feb0d\"" Sep 9 00:32:32.048746 kubelet[2333]: E0909 00:32:32.048715 2333 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:32:32.048949 containerd[1592]: time="2025-09-09T00:32:32.048928205Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:fec3f691a145cb26ff55e4af388500b7,Namespace:kube-system,Attempt:0,} returns sandbox id \"9f1a5f1f2440771a21abe15d34c77e07b6606a8fd95ecc7a9b43ddf1c0d3c36e\"" Sep 9 00:32:32.050433 kubelet[2333]: E0909 00:32:32.050394 2333 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:32:32.050994 kubelet[2333]: E0909 00:32:32.050594 2333 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:32:32.052477 containerd[1592]: time="2025-09-09T00:32:32.052439616Z" level=info msg="CreateContainer within sandbox \"a23a989282e00025b6e3eb36023e8422a854617b64f8a8e9bc3e7047f42f7a41\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Sep 9 00:32:32.054849 containerd[1592]: time="2025-09-09T00:32:32.054781715Z" level=info msg="CreateContainer within sandbox \"9f1a5f1f2440771a21abe15d34c77e07b6606a8fd95ecc7a9b43ddf1c0d3c36e\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Sep 9 00:32:32.055048 containerd[1592]: 
time="2025-09-09T00:32:32.055017554Z" level=info msg="CreateContainer within sandbox \"dc2697af4a91bca0b5a18147b24474e3b481beb8fe78a14f5639ee7a918feb0d\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Sep 9 00:32:32.075137 containerd[1592]: time="2025-09-09T00:32:32.074984434Z" level=info msg="CreateContainer within sandbox \"a23a989282e00025b6e3eb36023e8422a854617b64f8a8e9bc3e7047f42f7a41\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"c80420d5cfe547a4ff6a5b6fdf40a5f6f20703334956950059f32e3fdb4e187f\"" Sep 9 00:32:32.075808 containerd[1592]: time="2025-09-09T00:32:32.075757100Z" level=info msg="StartContainer for \"c80420d5cfe547a4ff6a5b6fdf40a5f6f20703334956950059f32e3fdb4e187f\"" Sep 9 00:32:32.084668 containerd[1592]: time="2025-09-09T00:32:32.084616953Z" level=info msg="CreateContainer within sandbox \"9f1a5f1f2440771a21abe15d34c77e07b6606a8fd95ecc7a9b43ddf1c0d3c36e\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"e9c4f5b7cfa20ea011c9e4450572f40033e2c54d4a102086c7ebe59f9a1654e8\"" Sep 9 00:32:32.085315 containerd[1592]: time="2025-09-09T00:32:32.085270141Z" level=info msg="StartContainer for \"e9c4f5b7cfa20ea011c9e4450572f40033e2c54d4a102086c7ebe59f9a1654e8\"" Sep 9 00:32:32.089043 containerd[1592]: time="2025-09-09T00:32:32.088943461Z" level=info msg="CreateContainer within sandbox \"dc2697af4a91bca0b5a18147b24474e3b481beb8fe78a14f5639ee7a918feb0d\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"c8d8d000445ad6f40b9e807ac64b4711267697f17bab1ce930f60212f47e5430\"" Sep 9 00:32:32.090265 containerd[1592]: time="2025-09-09T00:32:32.090242302Z" level=info msg="StartContainer for \"c8d8d000445ad6f40b9e807ac64b4711267697f17bab1ce930f60212f47e5430\"" Sep 9 00:32:32.338012 containerd[1592]: time="2025-09-09T00:32:32.337879189Z" level=info msg="StartContainer for \"c8d8d000445ad6f40b9e807ac64b4711267697f17bab1ce930f60212f47e5430\" returns successfully" Sep 9 00:32:32.338012 containerd[1592]: time="2025-09-09T00:32:32.337923465Z" level=info msg="StartContainer for \"c80420d5cfe547a4ff6a5b6fdf40a5f6f20703334956950059f32e3fdb4e187f\" returns successfully" Sep 9 00:32:32.338139 containerd[1592]: time="2025-09-09T00:32:32.338043655Z" level=info msg="StartContainer for \"e9c4f5b7cfa20ea011c9e4450572f40033e2c54d4a102086c7ebe59f9a1654e8\" returns successfully" Sep 9 00:32:32.362145 kubelet[2333]: E0909 00:32:32.362104 2333 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:32:32.365570 kubelet[2333]: E0909 00:32:32.365523 2333 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:32:32.368697 kubelet[2333]: E0909 00:32:32.368665 2333 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:32:33.371391 kubelet[2333]: E0909 00:32:33.371325 2333 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:32:33.885435 kubelet[2333]: E0909 00:32:33.884817 2333 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" 
node="localhost" Sep 9 00:32:35.189873 kubelet[2333]: I0909 00:32:35.189832 2333 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Sep 9 00:32:35.382256 kubelet[2333]: I0909 00:32:35.382203 2333 kubelet_node_status.go:75] "Successfully registered node" node="localhost" Sep 9 00:32:35.382760 kubelet[2333]: E0909 00:32:35.382435 2333 kubelet_node_status.go:535] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not found" Sep 9 00:32:35.942270 kubelet[2333]: E0909 00:32:35.942233 2333 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:32:36.084201 kubelet[2333]: E0909 00:32:36.084151 2333 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 9 00:32:36.185020 kubelet[2333]: E0909 00:32:36.184973 2333 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 9 00:32:36.286281 kubelet[2333]: E0909 00:32:36.286136 2333 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 9 00:32:36.386306 kubelet[2333]: E0909 00:32:36.386246 2333 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 9 00:32:36.486753 kubelet[2333]: E0909 00:32:36.486706 2333 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 9 00:32:36.587341 kubelet[2333]: E0909 00:32:36.587197 2333 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 9 00:32:36.687979 kubelet[2333]: E0909 00:32:36.687905 2333 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 9 00:32:36.788724 kubelet[2333]: E0909 00:32:36.788653 2333 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 9 00:32:36.889198 kubelet[2333]: E0909 00:32:36.889019 2333 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 9 00:32:36.962287 kubelet[2333]: E0909 00:32:36.962253 2333 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:32:36.989914 kubelet[2333]: E0909 00:32:36.989873 2333 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 9 00:32:37.320648 kubelet[2333]: I0909 00:32:37.320602 2333 apiserver.go:52] "Watching apiserver" Sep 9 00:32:37.330377 kubelet[2333]: I0909 00:32:37.330347 2333 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world" Sep 9 00:32:37.848025 systemd[1]: Reloading requested from client PID 2609 ('systemctl') (unit session-7.scope)... Sep 9 00:32:37.848047 systemd[1]: Reloading... Sep 9 00:32:37.922442 zram_generator::config[2651]: No configuration found. Sep 9 00:32:38.066488 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 9 00:32:38.158244 systemd[1]: Reloading finished in 309 ms. 
Sep 9 00:32:38.195994 kubelet[2333]: I0909 00:32:38.195964 2333 dynamic_cafile_content.go:174] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Sep 9 00:32:38.196002 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Sep 9 00:32:38.209110 systemd[1]: kubelet.service: Deactivated successfully. Sep 9 00:32:38.209741 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Sep 9 00:32:38.218611 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 9 00:32:38.402771 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 9 00:32:38.409453 (kubelet)[2703]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Sep 9 00:32:38.449197 kubelet[2703]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 9 00:32:38.449197 kubelet[2703]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Sep 9 00:32:38.449197 kubelet[2703]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 9 00:32:38.449554 kubelet[2703]: I0909 00:32:38.449314 2703 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Sep 9 00:32:38.462075 kubelet[2703]: I0909 00:32:38.461657 2703 server.go:491] "Kubelet version" kubeletVersion="v1.31.8" Sep 9 00:32:38.462075 kubelet[2703]: I0909 00:32:38.461695 2703 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Sep 9 00:32:38.462165 kubelet[2703]: I0909 00:32:38.462081 2703 server.go:934] "Client rotation is on, will bootstrap in background" Sep 9 00:32:38.463529 kubelet[2703]: I0909 00:32:38.463509 2703 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Sep 9 00:32:38.466219 kubelet[2703]: I0909 00:32:38.466135 2703 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Sep 9 00:32:38.473180 kubelet[2703]: E0909 00:32:38.473119 2703 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Sep 9 00:32:38.473180 kubelet[2703]: I0909 00:32:38.473169 2703 server.go:1408] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Sep 9 00:32:38.482392 kubelet[2703]: I0909 00:32:38.481977 2703 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Sep 9 00:32:38.482993 kubelet[2703]: I0909 00:32:38.482946 2703 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Sep 9 00:32:38.483180 kubelet[2703]: I0909 00:32:38.483078 2703 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Sep 9 00:32:38.483308 kubelet[2703]: I0909 00:32:38.483114 2703 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":1} Sep 9 00:32:38.483462 kubelet[2703]: I0909 00:32:38.483318 2703 topology_manager.go:138] "Creating topology manager with none policy" Sep 9 00:32:38.483462 kubelet[2703]: I0909 00:32:38.483329 2703 container_manager_linux.go:300] "Creating device plugin manager" Sep 9 00:32:38.483462 kubelet[2703]: I0909 00:32:38.483361 2703 state_mem.go:36] "Initialized new in-memory state store" Sep 9 00:32:38.483572 kubelet[2703]: I0909 00:32:38.483491 2703 kubelet.go:408] "Attempting to sync node with API server" Sep 9 00:32:38.483572 kubelet[2703]: I0909 00:32:38.483504 2703 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Sep 9 00:32:38.483572 kubelet[2703]: I0909 00:32:38.483536 2703 kubelet.go:314] "Adding apiserver pod source" Sep 9 00:32:38.483572 kubelet[2703]: I0909 00:32:38.483548 2703 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Sep 9 00:32:38.485156 kubelet[2703]: I0909 00:32:38.485123 2703 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Sep 9 00:32:38.485949 kubelet[2703]: I0909 00:32:38.485894 2703 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Sep 9 00:32:38.488429 kubelet[2703]: I0909 00:32:38.486441 2703 server.go:1274] "Started kubelet" Sep 9 00:32:38.489355 kubelet[2703]: I0909 00:32:38.489330 2703 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Sep 9 00:32:38.494885 kubelet[2703]: E0909 00:32:38.494857 2703 
kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Sep 9 00:32:38.496605 kubelet[2703]: I0909 00:32:38.496043 2703 volume_manager.go:289] "Starting Kubelet Volume Manager" Sep 9 00:32:38.496605 kubelet[2703]: I0909 00:32:38.496239 2703 desired_state_of_world_populator.go:147] "Desired state populator starts to run" Sep 9 00:32:38.496605 kubelet[2703]: I0909 00:32:38.496497 2703 reconciler.go:26] "Reconciler: start to sync state" Sep 9 00:32:38.497973 kubelet[2703]: I0909 00:32:38.497916 2703 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Sep 9 00:32:38.500809 kubelet[2703]: I0909 00:32:38.500786 2703 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Sep 9 00:32:38.502222 kubelet[2703]: I0909 00:32:38.501036 2703 server.go:449] "Adding debug handlers to kubelet server" Sep 9 00:32:38.504363 kubelet[2703]: I0909 00:32:38.504310 2703 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Sep 9 00:32:38.506257 kubelet[2703]: I0909 00:32:38.501064 2703 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Sep 9 00:32:38.506544 kubelet[2703]: I0909 00:32:38.506518 2703 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Sep 9 00:32:38.515999 kubelet[2703]: I0909 00:32:38.515963 2703 factory.go:221] Registration of the containerd container factory successfully Sep 9 00:32:38.515999 kubelet[2703]: I0909 00:32:38.515987 2703 factory.go:221] Registration of the systemd container factory successfully Sep 9 00:32:38.518759 kubelet[2703]: I0909 00:32:38.518719 2703 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Sep 9 00:32:38.521638 kubelet[2703]: I0909 00:32:38.521593 2703 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Sep 9 00:32:38.521638 kubelet[2703]: I0909 00:32:38.521625 2703 status_manager.go:217] "Starting to sync pod status with apiserver" Sep 9 00:32:38.521730 kubelet[2703]: I0909 00:32:38.521651 2703 kubelet.go:2321] "Starting kubelet main sync loop" Sep 9 00:32:38.521730 kubelet[2703]: E0909 00:32:38.521711 2703 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Sep 9 00:32:38.572001 kubelet[2703]: I0909 00:32:38.571971 2703 cpu_manager.go:214] "Starting CPU manager" policy="none" Sep 9 00:32:38.572001 kubelet[2703]: I0909 00:32:38.571988 2703 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Sep 9 00:32:38.572001 kubelet[2703]: I0909 00:32:38.572012 2703 state_mem.go:36] "Initialized new in-memory state store" Sep 9 00:32:38.572178 kubelet[2703]: I0909 00:32:38.572159 2703 state_mem.go:88] "Updated default CPUSet" cpuSet="" Sep 9 00:32:38.572206 kubelet[2703]: I0909 00:32:38.572175 2703 state_mem.go:96] "Updated CPUSet assignments" assignments={} Sep 9 00:32:38.572206 kubelet[2703]: I0909 00:32:38.572194 2703 policy_none.go:49] "None policy: Start" Sep 9 00:32:38.572780 kubelet[2703]: I0909 00:32:38.572760 2703 memory_manager.go:170] "Starting memorymanager" policy="None" Sep 9 00:32:38.572780 kubelet[2703]: I0909 00:32:38.572780 2703 state_mem.go:35] "Initializing new in-memory state store" Sep 9 00:32:38.572902 kubelet[2703]: I0909 00:32:38.572886 2703 state_mem.go:75] "Updated machine memory state" Sep 9 00:32:38.574391 kubelet[2703]: I0909 00:32:38.574359 2703 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Sep 9 00:32:38.574574 kubelet[2703]: I0909 00:32:38.574548 2703 eviction_manager.go:189] "Eviction manager: starting control loop" Sep 9 00:32:38.574602 kubelet[2703]: I0909 00:32:38.574564 2703 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Sep 9 00:32:38.575798 kubelet[2703]: I0909 00:32:38.575735 2703 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Sep 9 00:32:38.684138 kubelet[2703]: I0909 00:32:38.683803 2703 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Sep 9 00:32:38.690289 kubelet[2703]: I0909 00:32:38.690246 2703 kubelet_node_status.go:111] "Node was previously registered" node="localhost" Sep 9 00:32:38.690446 kubelet[2703]: I0909 00:32:38.690338 2703 kubelet_node_status.go:75] "Successfully registered node" node="localhost" Sep 9 00:32:38.696964 kubelet[2703]: I0909 00:32:38.696922 2703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/aa2fe977c3b466cb2ab2ff95b2c1a9ed-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"aa2fe977c3b466cb2ab2ff95b2c1a9ed\") " pod="kube-system/kube-apiserver-localhost" Sep 9 00:32:38.697073 kubelet[2703]: I0909 00:32:38.696970 2703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/aa2fe977c3b466cb2ab2ff95b2c1a9ed-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"aa2fe977c3b466cb2ab2ff95b2c1a9ed\") " pod="kube-system/kube-apiserver-localhost" Sep 9 00:32:38.697073 kubelet[2703]: I0909 00:32:38.696995 2703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: 
\"kubernetes.io/host-path/5dc878868de11c6196259ae42039f4ff-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"5dc878868de11c6196259ae42039f4ff\") " pod="kube-system/kube-scheduler-localhost" Sep 9 00:32:38.697073 kubelet[2703]: I0909 00:32:38.697019 2703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/aa2fe977c3b466cb2ab2ff95b2c1a9ed-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"aa2fe977c3b466cb2ab2ff95b2c1a9ed\") " pod="kube-system/kube-apiserver-localhost" Sep 9 00:32:38.697073 kubelet[2703]: I0909 00:32:38.697049 2703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/fec3f691a145cb26ff55e4af388500b7-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"fec3f691a145cb26ff55e4af388500b7\") " pod="kube-system/kube-controller-manager-localhost" Sep 9 00:32:38.697073 kubelet[2703]: I0909 00:32:38.697071 2703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/fec3f691a145cb26ff55e4af388500b7-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"fec3f691a145cb26ff55e4af388500b7\") " pod="kube-system/kube-controller-manager-localhost" Sep 9 00:32:38.697206 kubelet[2703]: I0909 00:32:38.697131 2703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/fec3f691a145cb26ff55e4af388500b7-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"fec3f691a145cb26ff55e4af388500b7\") " pod="kube-system/kube-controller-manager-localhost" Sep 9 00:32:38.697206 kubelet[2703]: I0909 00:32:38.697155 2703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/fec3f691a145cb26ff55e4af388500b7-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"fec3f691a145cb26ff55e4af388500b7\") " pod="kube-system/kube-controller-manager-localhost" Sep 9 00:32:38.697206 kubelet[2703]: I0909 00:32:38.697176 2703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/fec3f691a145cb26ff55e4af388500b7-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"fec3f691a145cb26ff55e4af388500b7\") " pod="kube-system/kube-controller-manager-localhost" Sep 9 00:32:38.932284 kubelet[2703]: E0909 00:32:38.931945 2703 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:32:38.932284 kubelet[2703]: E0909 00:32:38.932161 2703 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:32:38.932889 kubelet[2703]: E0909 00:32:38.932869 2703 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:32:39.484298 kubelet[2703]: I0909 00:32:39.484240 2703 apiserver.go:52] "Watching apiserver" Sep 9 00:32:39.496898 kubelet[2703]: I0909 00:32:39.496836 2703 desired_state_of_world_populator.go:155] 
"Finished populating initial desired state of world" Sep 9 00:32:39.535908 kubelet[2703]: E0909 00:32:39.535788 2703 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:32:39.884809 kubelet[2703]: E0909 00:32:39.884177 2703 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Sep 9 00:32:39.884809 kubelet[2703]: E0909 00:32:39.884401 2703 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:32:39.885643 kubelet[2703]: E0909 00:32:39.885617 2703 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost" Sep 9 00:32:39.885820 kubelet[2703]: E0909 00:32:39.885804 2703 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:32:40.213121 kubelet[2703]: I0909 00:32:40.212914 2703 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=2.212869389 podStartE2EDuration="2.212869389s" podCreationTimestamp="2025-09-09 00:32:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-09 00:32:39.885773215 +0000 UTC m=+1.471156918" watchObservedRunningTime="2025-09-09 00:32:40.212869389 +0000 UTC m=+1.798253092" Sep 9 00:32:40.506811 kubelet[2703]: I0909 00:32:40.506599 2703 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=2.506566433 podStartE2EDuration="2.506566433s" podCreationTimestamp="2025-09-09 00:32:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-09 00:32:40.213350672 +0000 UTC m=+1.798734386" watchObservedRunningTime="2025-09-09 00:32:40.506566433 +0000 UTC m=+2.091950136" Sep 9 00:32:40.537714 kubelet[2703]: E0909 00:32:40.537656 2703 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:32:40.538131 kubelet[2703]: E0909 00:32:40.538101 2703 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:32:40.557602 kubelet[2703]: I0909 00:32:40.557495 2703 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=2.557475414 podStartE2EDuration="2.557475414s" podCreationTimestamp="2025-09-09 00:32:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-09 00:32:40.507541446 +0000 UTC m=+2.092925159" watchObservedRunningTime="2025-09-09 00:32:40.557475414 +0000 UTC m=+2.142859117" Sep 9 00:32:41.673582 kubelet[2703]: I0909 00:32:41.673542 2703 kuberuntime_manager.go:1635] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Sep 9 00:32:41.674070 
containerd[1592]: time="2025-09-09T00:32:41.674009166Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Sep 9 00:32:41.674348 kubelet[2703]: I0909 00:32:41.674236 2703 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Sep 9 00:32:42.416345 kubelet[2703]: I0909 00:32:42.416281 2703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/911d4c6a-f393-4b24-bf4a-516b6a64ceda-xtables-lock\") pod \"kube-proxy-mdqhh\" (UID: \"911d4c6a-f393-4b24-bf4a-516b6a64ceda\") " pod="kube-system/kube-proxy-mdqhh" Sep 9 00:32:42.416345 kubelet[2703]: I0909 00:32:42.416332 2703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qblbj\" (UniqueName: \"kubernetes.io/projected/911d4c6a-f393-4b24-bf4a-516b6a64ceda-kube-api-access-qblbj\") pod \"kube-proxy-mdqhh\" (UID: \"911d4c6a-f393-4b24-bf4a-516b6a64ceda\") " pod="kube-system/kube-proxy-mdqhh" Sep 9 00:32:42.416345 kubelet[2703]: I0909 00:32:42.416358 2703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/911d4c6a-f393-4b24-bf4a-516b6a64ceda-kube-proxy\") pod \"kube-proxy-mdqhh\" (UID: \"911d4c6a-f393-4b24-bf4a-516b6a64ceda\") " pod="kube-system/kube-proxy-mdqhh" Sep 9 00:32:42.416547 kubelet[2703]: I0909 00:32:42.416375 2703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/911d4c6a-f393-4b24-bf4a-516b6a64ceda-lib-modules\") pod \"kube-proxy-mdqhh\" (UID: \"911d4c6a-f393-4b24-bf4a-516b6a64ceda\") " pod="kube-system/kube-proxy-mdqhh" Sep 9 00:32:42.711837 kubelet[2703]: E0909 00:32:42.711802 2703 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:32:42.712363 containerd[1592]: time="2025-09-09T00:32:42.712319716Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-mdqhh,Uid:911d4c6a-f393-4b24-bf4a-516b6a64ceda,Namespace:kube-system,Attempt:0,}" Sep 9 00:32:43.047300 containerd[1592]: time="2025-09-09T00:32:43.046725000Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 9 00:32:43.047300 containerd[1592]: time="2025-09-09T00:32:43.046849968Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 9 00:32:43.047300 containerd[1592]: time="2025-09-09T00:32:43.046875852Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 9 00:32:43.047300 containerd[1592]: time="2025-09-09T00:32:43.047023266Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 9 00:32:43.106849 containerd[1592]: time="2025-09-09T00:32:43.106806581Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-mdqhh,Uid:911d4c6a-f393-4b24-bf4a-516b6a64ceda,Namespace:kube-system,Attempt:0,} returns sandbox id \"0bc89ed661d22e217d67b5d998a6c87f457e8a70233e97ca9e34e77890517777\"" Sep 9 00:32:43.108001 kubelet[2703]: E0909 00:32:43.107961 2703 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:32:43.110796 containerd[1592]: time="2025-09-09T00:32:43.110746984Z" level=info msg="CreateContainer within sandbox \"0bc89ed661d22e217d67b5d998a6c87f457e8a70233e97ca9e34e77890517777\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Sep 9 00:32:43.122187 kubelet[2703]: I0909 00:32:43.122148 2703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fcscg\" (UniqueName: \"kubernetes.io/projected/12f44dcc-03bf-4360-84f9-792f8a76d9a9-kube-api-access-fcscg\") pod \"tigera-operator-58fc44c59b-9frjt\" (UID: \"12f44dcc-03bf-4360-84f9-792f8a76d9a9\") " pod="tigera-operator/tigera-operator-58fc44c59b-9frjt" Sep 9 00:32:43.122452 kubelet[2703]: I0909 00:32:43.122393 2703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/12f44dcc-03bf-4360-84f9-792f8a76d9a9-var-lib-calico\") pod \"tigera-operator-58fc44c59b-9frjt\" (UID: \"12f44dcc-03bf-4360-84f9-792f8a76d9a9\") " pod="tigera-operator/tigera-operator-58fc44c59b-9frjt" Sep 9 00:32:43.130755 containerd[1592]: time="2025-09-09T00:32:43.130686356Z" level=info msg="CreateContainer within sandbox \"0bc89ed661d22e217d67b5d998a6c87f457e8a70233e97ca9e34e77890517777\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"f8db78340c8016713d5f3916cfe9147870a5010218f2dcbc0540d74b068d93d4\"" Sep 9 00:32:43.132136 containerd[1592]: time="2025-09-09T00:32:43.132079685Z" level=info msg="StartContainer for \"f8db78340c8016713d5f3916cfe9147870a5010218f2dcbc0540d74b068d93d4\"" Sep 9 00:32:43.415341 containerd[1592]: time="2025-09-09T00:32:43.415190308Z" level=info msg="StartContainer for \"f8db78340c8016713d5f3916cfe9147870a5010218f2dcbc0540d74b068d93d4\" returns successfully" Sep 9 00:32:43.544570 kubelet[2703]: E0909 00:32:43.544519 2703 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:32:43.664072 containerd[1592]: time="2025-09-09T00:32:43.664022569Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-58fc44c59b-9frjt,Uid:12f44dcc-03bf-4360-84f9-792f8a76d9a9,Namespace:tigera-operator,Attempt:0,}" Sep 9 00:32:43.693810 containerd[1592]: time="2025-09-09T00:32:43.692936156Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 9 00:32:43.693978 containerd[1592]: time="2025-09-09T00:32:43.693794409Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 9 00:32:43.693978 containerd[1592]: time="2025-09-09T00:32:43.693848602Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 9 00:32:43.694090 containerd[1592]: time="2025-09-09T00:32:43.694031320Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 9 00:32:43.757306 containerd[1592]: time="2025-09-09T00:32:43.757257298Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-58fc44c59b-9frjt,Uid:12f44dcc-03bf-4360-84f9-792f8a76d9a9,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"d02b5eabe6d59deecacf2667b9f5cb39b6677720d0056c314feef6ccdd025913\"" Sep 9 00:32:43.760294 containerd[1592]: time="2025-09-09T00:32:43.760191041Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.6\"" Sep 9 00:32:45.589337 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount586010777.mount: Deactivated successfully. Sep 9 00:32:46.350151 containerd[1592]: time="2025-09-09T00:32:46.350093466Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 00:32:46.351114 containerd[1592]: time="2025-09-09T00:32:46.351035137Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.6: active requests=0, bytes read=25062609" Sep 9 00:32:46.352336 containerd[1592]: time="2025-09-09T00:32:46.352274359Z" level=info msg="ImageCreate event name:\"sha256:1911afdd8478c6ca3036ff85614050d5d19acc0f0c3f6a5a7b3e34b38dd309c9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 00:32:46.354727 containerd[1592]: time="2025-09-09T00:32:46.354685485Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:00a7a9b62f9b9a4e0856128b078539783b8352b07f707bff595cb604cc580f6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 00:32:46.355245 containerd[1592]: time="2025-09-09T00:32:46.355212265Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.6\" with image id \"sha256:1911afdd8478c6ca3036ff85614050d5d19acc0f0c3f6a5a7b3e34b38dd309c9\", repo tag \"quay.io/tigera/operator:v1.38.6\", repo digest \"quay.io/tigera/operator@sha256:00a7a9b62f9b9a4e0856128b078539783b8352b07f707bff595cb604cc580f6e\", size \"25058604\" in 2.594967865s" Sep 9 00:32:46.355299 containerd[1592]: time="2025-09-09T00:32:46.355245052Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.6\" returns image reference \"sha256:1911afdd8478c6ca3036ff85614050d5d19acc0f0c3f6a5a7b3e34b38dd309c9\"" Sep 9 00:32:46.357550 containerd[1592]: time="2025-09-09T00:32:46.357514828Z" level=info msg="CreateContainer within sandbox \"d02b5eabe6d59deecacf2667b9f5cb39b6677720d0056c314feef6ccdd025913\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Sep 9 00:32:46.370855 containerd[1592]: time="2025-09-09T00:32:46.370710583Z" level=info msg="CreateContainer within sandbox \"d02b5eabe6d59deecacf2667b9f5cb39b6677720d0056c314feef6ccdd025913\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"be0e968daf35c62619a7dcb825c3b0a72b42b76076438784b6d6c45485b583ea\"" Sep 9 00:32:46.371326 containerd[1592]: time="2025-09-09T00:32:46.371287818Z" level=info msg="StartContainer for \"be0e968daf35c62619a7dcb825c3b0a72b42b76076438784b6d6c45485b583ea\"" Sep 9 00:32:46.431758 containerd[1592]: time="2025-09-09T00:32:46.431706332Z" level=info msg="StartContainer for \"be0e968daf35c62619a7dcb825c3b0a72b42b76076438784b6d6c45485b583ea\" returns successfully" Sep 9 00:32:46.467080 kubelet[2703]: E0909 00:32:46.467037 2703 dns.go:153] "Nameserver limits 
exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:32:46.484159 kubelet[2703]: I0909 00:32:46.483733 2703 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-mdqhh" podStartSLOduration=4.483707602 podStartE2EDuration="4.483707602s" podCreationTimestamp="2025-09-09 00:32:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-09 00:32:43.557126683 +0000 UTC m=+5.142510646" watchObservedRunningTime="2025-09-09 00:32:46.483707602 +0000 UTC m=+8.069091305" Sep 9 00:32:46.553136 kubelet[2703]: E0909 00:32:46.553088 2703 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:32:46.568925 kubelet[2703]: I0909 00:32:46.568852 2703 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-58fc44c59b-9frjt" podStartSLOduration=1.972049609 podStartE2EDuration="4.568831649s" podCreationTimestamp="2025-09-09 00:32:42 +0000 UTC" firstStartedPulling="2025-09-09 00:32:43.759343479 +0000 UTC m=+5.344727182" lastFinishedPulling="2025-09-09 00:32:46.356125519 +0000 UTC m=+7.941509222" observedRunningTime="2025-09-09 00:32:46.568793402 +0000 UTC m=+8.154177105" watchObservedRunningTime="2025-09-09 00:32:46.568831649 +0000 UTC m=+8.154215363" Sep 9 00:32:47.483407 kubelet[2703]: E0909 00:32:47.483349 2703 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:32:47.557404 kubelet[2703]: E0909 00:32:47.557361 2703 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:32:47.781615 kubelet[2703]: E0909 00:32:47.781476 2703 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:32:48.561492 kubelet[2703]: E0909 00:32:48.558998 2703 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:32:52.414069 sudo[1778]: pam_unix(sudo:session): session closed for user root Sep 9 00:32:52.421120 sshd[1771]: pam_unix(sshd:session): session closed for user core Sep 9 00:32:52.429286 systemd-logind[1548]: Session 7 logged out. Waiting for processes to exit. Sep 9 00:32:52.433214 systemd[1]: sshd@6-10.0.0.108:22-10.0.0.1:58052.service: Deactivated successfully. Sep 9 00:32:52.446882 systemd[1]: session-7.scope: Deactivated successfully. Sep 9 00:32:52.449374 systemd-logind[1548]: Removed session 7. 
Sep 9 00:32:54.990150 kubelet[2703]: I0909 00:32:54.990088 2703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6440fec9-3201-41ba-823d-c50a0f091371-tigera-ca-bundle\") pod \"calico-typha-b579468df-p4jfg\" (UID: \"6440fec9-3201-41ba-823d-c50a0f091371\") " pod="calico-system/calico-typha-b579468df-p4jfg" Sep 9 00:32:54.990150 kubelet[2703]: I0909 00:32:54.990135 2703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/6440fec9-3201-41ba-823d-c50a0f091371-typha-certs\") pod \"calico-typha-b579468df-p4jfg\" (UID: \"6440fec9-3201-41ba-823d-c50a0f091371\") " pod="calico-system/calico-typha-b579468df-p4jfg" Sep 9 00:32:54.990150 kubelet[2703]: I0909 00:32:54.990157 2703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-58rcm\" (UniqueName: \"kubernetes.io/projected/6440fec9-3201-41ba-823d-c50a0f091371-kube-api-access-58rcm\") pod \"calico-typha-b579468df-p4jfg\" (UID: \"6440fec9-3201-41ba-823d-c50a0f091371\") " pod="calico-system/calico-typha-b579468df-p4jfg" Sep 9 00:32:55.156805 kubelet[2703]: E0909 00:32:55.156744 2703 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:32:55.157578 containerd[1592]: time="2025-09-09T00:32:55.157527032Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-b579468df-p4jfg,Uid:6440fec9-3201-41ba-823d-c50a0f091371,Namespace:calico-system,Attempt:0,}" Sep 9 00:32:55.255931 containerd[1592]: time="2025-09-09T00:32:55.255106773Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 9 00:32:55.255931 containerd[1592]: time="2025-09-09T00:32:55.255195693Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 9 00:32:55.255931 containerd[1592]: time="2025-09-09T00:32:55.255210172Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 9 00:32:55.255931 containerd[1592]: time="2025-09-09T00:32:55.255324112Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 9 00:32:55.316568 containerd[1592]: time="2025-09-09T00:32:55.316492538Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-b579468df-p4jfg,Uid:6440fec9-3201-41ba-823d-c50a0f091371,Namespace:calico-system,Attempt:0,} returns sandbox id \"69f23a750d2d5b46f026f693b83599a7a26c94013a2c9bcf690f373b8fa4f281\"" Sep 9 00:32:55.317934 kubelet[2703]: E0909 00:32:55.317256 2703 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:32:55.318984 containerd[1592]: time="2025-09-09T00:32:55.318872674Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.3\"" Sep 9 00:32:55.392675 kubelet[2703]: I0909 00:32:55.392523 2703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dpwkb\" (UniqueName: \"kubernetes.io/projected/292ed936-3945-4eed-8b7f-ae76b52ea327-kube-api-access-dpwkb\") pod \"calico-node-2pdlc\" (UID: \"292ed936-3945-4eed-8b7f-ae76b52ea327\") " pod="calico-system/calico-node-2pdlc" Sep 9 00:32:55.392675 kubelet[2703]: I0909 00:32:55.392591 2703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/292ed936-3945-4eed-8b7f-ae76b52ea327-cni-bin-dir\") pod \"calico-node-2pdlc\" (UID: \"292ed936-3945-4eed-8b7f-ae76b52ea327\") " pod="calico-system/calico-node-2pdlc" Sep 9 00:32:55.392675 kubelet[2703]: I0909 00:32:55.392620 2703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/292ed936-3945-4eed-8b7f-ae76b52ea327-var-run-calico\") pod \"calico-node-2pdlc\" (UID: \"292ed936-3945-4eed-8b7f-ae76b52ea327\") " pod="calico-system/calico-node-2pdlc" Sep 9 00:32:55.392675 kubelet[2703]: I0909 00:32:55.392644 2703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/292ed936-3945-4eed-8b7f-ae76b52ea327-var-lib-calico\") pod \"calico-node-2pdlc\" (UID: \"292ed936-3945-4eed-8b7f-ae76b52ea327\") " pod="calico-system/calico-node-2pdlc" Sep 9 00:32:55.392675 kubelet[2703]: I0909 00:32:55.392674 2703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/292ed936-3945-4eed-8b7f-ae76b52ea327-cni-log-dir\") pod \"calico-node-2pdlc\" (UID: \"292ed936-3945-4eed-8b7f-ae76b52ea327\") " pod="calico-system/calico-node-2pdlc" Sep 9 00:32:55.393043 kubelet[2703]: I0909 00:32:55.392699 2703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/292ed936-3945-4eed-8b7f-ae76b52ea327-node-certs\") pod \"calico-node-2pdlc\" (UID: \"292ed936-3945-4eed-8b7f-ae76b52ea327\") " pod="calico-system/calico-node-2pdlc" Sep 9 00:32:55.393043 kubelet[2703]: I0909 00:32:55.392722 2703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/292ed936-3945-4eed-8b7f-ae76b52ea327-cni-net-dir\") pod \"calico-node-2pdlc\" (UID: \"292ed936-3945-4eed-8b7f-ae76b52ea327\") " pod="calico-system/calico-node-2pdlc" Sep 9 00:32:55.393043 kubelet[2703]: I0909 00:32:55.392745 2703 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/292ed936-3945-4eed-8b7f-ae76b52ea327-xtables-lock\") pod \"calico-node-2pdlc\" (UID: \"292ed936-3945-4eed-8b7f-ae76b52ea327\") " pod="calico-system/calico-node-2pdlc" Sep 9 00:32:55.393043 kubelet[2703]: I0909 00:32:55.392767 2703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/292ed936-3945-4eed-8b7f-ae76b52ea327-flexvol-driver-host\") pod \"calico-node-2pdlc\" (UID: \"292ed936-3945-4eed-8b7f-ae76b52ea327\") " pod="calico-system/calico-node-2pdlc" Sep 9 00:32:55.393043 kubelet[2703]: I0909 00:32:55.392813 2703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/292ed936-3945-4eed-8b7f-ae76b52ea327-policysync\") pod \"calico-node-2pdlc\" (UID: \"292ed936-3945-4eed-8b7f-ae76b52ea327\") " pod="calico-system/calico-node-2pdlc" Sep 9 00:32:55.393222 kubelet[2703]: I0909 00:32:55.392845 2703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/292ed936-3945-4eed-8b7f-ae76b52ea327-tigera-ca-bundle\") pod \"calico-node-2pdlc\" (UID: \"292ed936-3945-4eed-8b7f-ae76b52ea327\") " pod="calico-system/calico-node-2pdlc" Sep 9 00:32:55.393222 kubelet[2703]: I0909 00:32:55.392860 2703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/292ed936-3945-4eed-8b7f-ae76b52ea327-lib-modules\") pod \"calico-node-2pdlc\" (UID: \"292ed936-3945-4eed-8b7f-ae76b52ea327\") " pod="calico-system/calico-node-2pdlc" Sep 9 00:32:55.481267 kubelet[2703]: E0909 00:32:55.481205 2703 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-dvfmz" podUID="45baac1d-c9f0-4704-a887-7b015b292f0b" Sep 9 00:32:55.495259 kubelet[2703]: E0909 00:32:55.495218 2703 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 00:32:55.495259 kubelet[2703]: W0909 00:32:55.495281 2703 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 00:32:55.495485 kubelet[2703]: E0909 00:32:55.495316 2703 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 00:32:55.498173 kubelet[2703]: E0909 00:32:55.498136 2703 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 00:32:55.498173 kubelet[2703]: W0909 00:32:55.498161 2703 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 00:32:55.498279 kubelet[2703]: E0909 00:32:55.498187 2703 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 9 00:32:55.502710 kubelet[2703]: E0909 00:32:55.502675 2703 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 00:32:55.502710 kubelet[2703]: W0909 00:32:55.502692 2703 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 00:32:55.502710 kubelet[2703]: E0909 00:32:55.502709 2703 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 00:32:55.552765 containerd[1592]: time="2025-09-09T00:32:55.552565448Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-2pdlc,Uid:292ed936-3945-4eed-8b7f-ae76b52ea327,Namespace:calico-system,Attempt:0,}" Sep 9 00:32:55.581357 containerd[1592]: time="2025-09-09T00:32:55.580533613Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 9 00:32:55.581357 containerd[1592]: time="2025-09-09T00:32:55.581326003Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 9 00:32:55.581357 containerd[1592]: time="2025-09-09T00:32:55.581344060Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 9 00:32:55.581581 containerd[1592]: time="2025-09-09T00:32:55.581511167Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 9 00:32:55.594439 kubelet[2703]: E0909 00:32:55.594272 2703 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 00:32:55.594439 kubelet[2703]: W0909 00:32:55.594294 2703 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 00:32:55.594439 kubelet[2703]: E0909 00:32:55.594316 2703 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 00:32:55.594439 kubelet[2703]: I0909 00:32:55.594347 2703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v75ld\" (UniqueName: \"kubernetes.io/projected/45baac1d-c9f0-4704-a887-7b015b292f0b-kube-api-access-v75ld\") pod \"csi-node-driver-dvfmz\" (UID: \"45baac1d-c9f0-4704-a887-7b015b292f0b\") " pod="calico-system/csi-node-driver-dvfmz" Sep 9 00:32:55.594666 kubelet[2703]: E0909 00:32:55.594615 2703 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 00:32:55.594666 kubelet[2703]: W0909 00:32:55.594654 2703 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 00:32:55.594748 kubelet[2703]: E0909 00:32:55.594697 2703 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 9 00:32:55.594748 kubelet[2703]: I0909 00:32:55.594737 2703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/45baac1d-c9f0-4704-a887-7b015b292f0b-varrun\") pod \"csi-node-driver-dvfmz\" (UID: \"45baac1d-c9f0-4704-a887-7b015b292f0b\") " pod="calico-system/csi-node-driver-dvfmz" Sep 9 00:32:55.595003 kubelet[2703]: E0909 00:32:55.594986 2703 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 00:32:55.595003 kubelet[2703]: W0909 00:32:55.595000 2703 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 00:32:55.595061 kubelet[2703]: E0909 00:32:55.595022 2703 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 00:32:55.595061 kubelet[2703]: I0909 00:32:55.595037 2703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/45baac1d-c9f0-4704-a887-7b015b292f0b-registration-dir\") pod \"csi-node-driver-dvfmz\" (UID: \"45baac1d-c9f0-4704-a887-7b015b292f0b\") " pod="calico-system/csi-node-driver-dvfmz" Sep 9 00:32:55.595255 kubelet[2703]: E0909 00:32:55.595239 2703 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 00:32:55.595255 kubelet[2703]: W0909 00:32:55.595251 2703 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 00:32:55.595313 kubelet[2703]: E0909 00:32:55.595271 2703 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 00:32:55.595313 kubelet[2703]: I0909 00:32:55.595287 2703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/45baac1d-c9f0-4704-a887-7b015b292f0b-socket-dir\") pod \"csi-node-driver-dvfmz\" (UID: \"45baac1d-c9f0-4704-a887-7b015b292f0b\") " pod="calico-system/csi-node-driver-dvfmz" Sep 9 00:32:55.595932 kubelet[2703]: E0909 00:32:55.595910 2703 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 00:32:55.595932 kubelet[2703]: W0909 00:32:55.595924 2703 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 00:32:55.596028 kubelet[2703]: E0909 00:32:55.595944 2703 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 9 00:32:55.596028 kubelet[2703]: I0909 00:32:55.595961 2703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/45baac1d-c9f0-4704-a887-7b015b292f0b-kubelet-dir\") pod \"csi-node-driver-dvfmz\" (UID: \"45baac1d-c9f0-4704-a887-7b015b292f0b\") " pod="calico-system/csi-node-driver-dvfmz" Sep 9 00:32:55.596185 kubelet[2703]: E0909 00:32:55.596168 2703 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 00:32:55.596185 kubelet[2703]: W0909 00:32:55.596181 2703 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 00:32:55.596564 kubelet[2703]: E0909 00:32:55.596546 2703 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 00:32:55.596613 kubelet[2703]: E0909 00:32:55.596608 2703 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 00:32:55.596640 kubelet[2703]: W0909 00:32:55.596615 2703 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 00:32:55.596850 kubelet[2703]: E0909 00:32:55.596830 2703 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 00:32:55.596850 kubelet[2703]: W0909 00:32:55.596844 2703 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 00:32:55.597094 kubelet[2703]: E0909 00:32:55.597074 2703 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 00:32:55.597094 kubelet[2703]: W0909 00:32:55.597086 2703 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 00:32:55.597254 kubelet[2703]: E0909 00:32:55.597238 2703 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 00:32:55.597254 kubelet[2703]: E0909 00:32:55.597250 2703 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 00:32:55.597300 kubelet[2703]: E0909 00:32:55.597264 2703 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 9 00:32:55.597300 kubelet[2703]: E0909 00:32:55.597293 2703 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 00:32:55.597300 kubelet[2703]: W0909 00:32:55.597301 2703 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 00:32:55.597372 kubelet[2703]: E0909 00:32:55.597330 2703 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 00:32:55.597744 kubelet[2703]: E0909 00:32:55.597561 2703 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 00:32:55.597744 kubelet[2703]: W0909 00:32:55.597573 2703 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 00:32:55.597744 kubelet[2703]: E0909 00:32:55.597593 2703 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 00:32:55.597875 kubelet[2703]: E0909 00:32:55.597857 2703 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 00:32:55.597875 kubelet[2703]: W0909 00:32:55.597869 2703 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 00:32:55.597945 kubelet[2703]: E0909 00:32:55.597880 2703 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 00:32:55.598145 kubelet[2703]: E0909 00:32:55.598128 2703 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 00:32:55.598145 kubelet[2703]: W0909 00:32:55.598141 2703 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 00:32:55.598202 kubelet[2703]: E0909 00:32:55.598151 2703 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 00:32:55.599115 kubelet[2703]: E0909 00:32:55.598646 2703 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 00:32:55.599115 kubelet[2703]: W0909 00:32:55.598662 2703 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 00:32:55.599115 kubelet[2703]: E0909 00:32:55.598675 2703 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 9 00:32:55.599115 kubelet[2703]: E0909 00:32:55.598970 2703 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 00:32:55.599115 kubelet[2703]: W0909 00:32:55.598978 2703 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 00:32:55.599115 kubelet[2703]: E0909 00:32:55.598986 2703 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 00:32:55.638178 containerd[1592]: time="2025-09-09T00:32:55.638066906Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-2pdlc,Uid:292ed936-3945-4eed-8b7f-ae76b52ea327,Namespace:calico-system,Attempt:0,} returns sandbox id \"d924295075ba8269e317f5ad09b06d91c282d1d1206a080dd27cf99ff66f9509\"" Sep 9 00:32:55.697401 kubelet[2703]: E0909 00:32:55.697340 2703 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 00:32:55.697401 kubelet[2703]: W0909 00:32:55.697369 2703 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 00:32:55.697401 kubelet[2703]: E0909 00:32:55.697392 2703 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 00:32:55.697773 kubelet[2703]: E0909 00:32:55.697742 2703 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 00:32:55.697773 kubelet[2703]: W0909 00:32:55.697755 2703 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 00:32:55.697773 kubelet[2703]: E0909 00:32:55.697768 2703 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 00:32:55.698011 kubelet[2703]: E0909 00:32:55.697993 2703 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 00:32:55.698011 kubelet[2703]: W0909 00:32:55.698003 2703 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 00:32:55.698094 kubelet[2703]: E0909 00:32:55.698015 2703 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 9 00:32:55.698284 kubelet[2703]: E0909 00:32:55.698261 2703 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 00:32:55.698284 kubelet[2703]: W0909 00:32:55.698279 2703 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 00:32:55.698371 kubelet[2703]: E0909 00:32:55.698299 2703 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 00:32:55.698560 kubelet[2703]: E0909 00:32:55.698526 2703 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 00:32:55.698560 kubelet[2703]: W0909 00:32:55.698543 2703 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 00:32:55.698560 kubelet[2703]: E0909 00:32:55.698559 2703 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 00:32:55.698762 kubelet[2703]: E0909 00:32:55.698742 2703 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 00:32:55.698762 kubelet[2703]: W0909 00:32:55.698755 2703 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 00:32:55.698848 kubelet[2703]: E0909 00:32:55.698769 2703 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 00:32:55.699074 kubelet[2703]: E0909 00:32:55.699051 2703 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 00:32:55.699074 kubelet[2703]: W0909 00:32:55.699063 2703 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 00:32:55.699074 kubelet[2703]: E0909 00:32:55.699076 2703 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 00:32:55.699368 kubelet[2703]: E0909 00:32:55.699335 2703 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 00:32:55.699400 kubelet[2703]: W0909 00:32:55.699371 2703 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 00:32:55.699456 kubelet[2703]: E0909 00:32:55.699433 2703 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 9 00:32:55.699803 kubelet[2703]: E0909 00:32:55.699784 2703 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 00:32:55.699803 kubelet[2703]: W0909 00:32:55.699797 2703 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 00:32:55.699966 kubelet[2703]: E0909 00:32:55.699812 2703 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 00:32:55.700052 kubelet[2703]: E0909 00:32:55.700031 2703 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 00:32:55.700052 kubelet[2703]: W0909 00:32:55.700047 2703 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 00:32:55.700127 kubelet[2703]: E0909 00:32:55.700065 2703 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 00:32:55.700283 kubelet[2703]: E0909 00:32:55.700266 2703 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 00:32:55.700283 kubelet[2703]: W0909 00:32:55.700278 2703 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 00:32:55.700358 kubelet[2703]: E0909 00:32:55.700307 2703 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 00:32:55.700571 kubelet[2703]: E0909 00:32:55.700536 2703 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 00:32:55.700571 kubelet[2703]: W0909 00:32:55.700554 2703 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 00:32:55.700670 kubelet[2703]: E0909 00:32:55.700651 2703 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 00:32:55.700762 kubelet[2703]: E0909 00:32:55.700746 2703 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 00:32:55.700762 kubelet[2703]: W0909 00:32:55.700758 2703 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 00:32:55.700838 kubelet[2703]: E0909 00:32:55.700784 2703 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 9 00:32:55.700982 kubelet[2703]: E0909 00:32:55.700963 2703 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 00:32:55.700982 kubelet[2703]: W0909 00:32:55.700977 2703 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 00:32:55.701050 kubelet[2703]: E0909 00:32:55.701005 2703 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 00:32:55.701274 kubelet[2703]: E0909 00:32:55.701214 2703 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 00:32:55.701274 kubelet[2703]: W0909 00:32:55.701228 2703 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 00:32:55.701600 kubelet[2703]: E0909 00:32:55.701361 2703 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 00:32:55.701600 kubelet[2703]: E0909 00:32:55.701540 2703 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 00:32:55.701600 kubelet[2703]: W0909 00:32:55.701550 2703 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 00:32:55.701600 kubelet[2703]: E0909 00:32:55.701567 2703 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 00:32:55.701994 kubelet[2703]: E0909 00:32:55.701951 2703 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 00:32:55.701994 kubelet[2703]: W0909 00:32:55.701965 2703 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 00:32:55.701994 kubelet[2703]: E0909 00:32:55.701995 2703 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 00:32:55.702355 kubelet[2703]: E0909 00:32:55.702334 2703 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 00:32:55.702355 kubelet[2703]: W0909 00:32:55.702350 2703 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 00:32:55.702470 kubelet[2703]: E0909 00:32:55.702425 2703 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 9 00:32:55.702743 kubelet[2703]: E0909 00:32:55.702704 2703 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 00:32:55.702743 kubelet[2703]: W0909 00:32:55.702733 2703 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 00:32:55.702743 kubelet[2703]: E0909 00:32:55.702796 2703 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 00:32:55.703077 kubelet[2703]: E0909 00:32:55.702995 2703 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 00:32:55.703077 kubelet[2703]: W0909 00:32:55.703009 2703 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 00:32:55.703077 kubelet[2703]: E0909 00:32:55.703028 2703 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 00:32:55.703319 kubelet[2703]: E0909 00:32:55.703287 2703 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 00:32:55.703319 kubelet[2703]: W0909 00:32:55.703306 2703 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 00:32:55.703467 kubelet[2703]: E0909 00:32:55.703326 2703 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 00:32:55.703630 kubelet[2703]: E0909 00:32:55.703581 2703 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 00:32:55.703630 kubelet[2703]: W0909 00:32:55.703599 2703 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 00:32:55.703630 kubelet[2703]: E0909 00:32:55.703630 2703 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 00:32:55.703902 kubelet[2703]: E0909 00:32:55.703885 2703 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 00:32:55.703902 kubelet[2703]: W0909 00:32:55.703899 2703 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 00:32:55.704006 kubelet[2703]: E0909 00:32:55.703916 2703 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 9 00:32:55.704272 kubelet[2703]: E0909 00:32:55.704254 2703 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 00:32:55.704272 kubelet[2703]: W0909 00:32:55.704271 2703 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 00:32:55.704362 kubelet[2703]: E0909 00:32:55.704286 2703 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 00:32:55.704671 kubelet[2703]: E0909 00:32:55.704594 2703 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 00:32:55.704671 kubelet[2703]: W0909 00:32:55.704614 2703 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 00:32:55.704671 kubelet[2703]: E0909 00:32:55.704635 2703 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 00:32:55.710089 kubelet[2703]: E0909 00:32:55.710058 2703 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 00:32:55.710089 kubelet[2703]: W0909 00:32:55.710091 2703 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 00:32:55.710195 kubelet[2703]: E0909 00:32:55.710111 2703 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 00:32:56.922459 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2123620040.mount: Deactivated successfully. 
Sep 9 00:32:57.462493 containerd[1592]: time="2025-09-09T00:32:57.462233283Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 00:32:57.466976 containerd[1592]: time="2025-09-09T00:32:57.464721475Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.3: active requests=0, bytes read=35237389" Sep 9 00:32:57.467733 containerd[1592]: time="2025-09-09T00:32:57.467674946Z" level=info msg="ImageCreate event name:\"sha256:1d7bb7b0cce2924d35c7c26f6b6600409ea7c9535074c3d2e517ffbb3a0e0b36\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 00:32:57.471816 containerd[1592]: time="2025-09-09T00:32:57.471759365Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:f4a3d61ffda9c98a53adeb412c5af404ca3727a3cc2d0b4ef28d197bdd47ecaa\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 00:32:57.472798 containerd[1592]: time="2025-09-09T00:32:57.472741164Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.3\" with image id \"sha256:1d7bb7b0cce2924d35c7c26f6b6600409ea7c9535074c3d2e517ffbb3a0e0b36\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:f4a3d61ffda9c98a53adeb412c5af404ca3727a3cc2d0b4ef28d197bdd47ecaa\", size \"35237243\" in 2.15382909s" Sep 9 00:32:57.472866 containerd[1592]: time="2025-09-09T00:32:57.472803078Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.3\" returns image reference \"sha256:1d7bb7b0cce2924d35c7c26f6b6600409ea7c9535074c3d2e517ffbb3a0e0b36\"" Sep 9 00:32:57.480858 containerd[1592]: time="2025-09-09T00:32:57.480781303Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\"" Sep 9 00:32:57.504753 containerd[1592]: time="2025-09-09T00:32:57.504706625Z" level=info msg="CreateContainer within sandbox \"69f23a750d2d5b46f026f693b83599a7a26c94013a2c9bcf690f373b8fa4f281\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Sep 9 00:32:57.523062 containerd[1592]: time="2025-09-09T00:32:57.523005293Z" level=info msg="CreateContainer within sandbox \"69f23a750d2d5b46f026f693b83599a7a26c94013a2c9bcf690f373b8fa4f281\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"2a93b04770b27532ead964658d4a921562358be19403aa9f77b268d898f1152c\"" Sep 9 00:32:57.524547 kubelet[2703]: E0909 00:32:57.524501 2703 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-dvfmz" podUID="45baac1d-c9f0-4704-a887-7b015b292f0b" Sep 9 00:32:57.526740 containerd[1592]: time="2025-09-09T00:32:57.526703794Z" level=info msg="StartContainer for \"2a93b04770b27532ead964658d4a921562358be19403aa9f77b268d898f1152c\"" Sep 9 00:32:57.604637 containerd[1592]: time="2025-09-09T00:32:57.604222745Z" level=info msg="StartContainer for \"2a93b04770b27532ead964658d4a921562358be19403aa9f77b268d898f1152c\" returns successfully" Sep 9 00:32:58.594789 kubelet[2703]: E0909 00:32:58.594441 2703 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:32:58.610148 kubelet[2703]: I0909 00:32:58.610078 2703 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="calico-system/calico-typha-b579468df-p4jfg" podStartSLOduration=2.453261259 podStartE2EDuration="4.610061039s" podCreationTimestamp="2025-09-09 00:32:54 +0000 UTC" firstStartedPulling="2025-09-09 00:32:55.317795749 +0000 UTC m=+16.903179452" lastFinishedPulling="2025-09-09 00:32:57.474595529 +0000 UTC m=+19.059979232" observedRunningTime="2025-09-09 00:32:58.609716916 +0000 UTC m=+20.195100629" watchObservedRunningTime="2025-09-09 00:32:58.610061039 +0000 UTC m=+20.195444732" Sep 9 00:32:58.614742 kubelet[2703]: E0909 00:32:58.614689 2703 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 00:32:58.614742 kubelet[2703]: W0909 00:32:58.614715 2703 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 00:32:58.614742 kubelet[2703]: E0909 00:32:58.614740 2703 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 00:32:58.615059 kubelet[2703]: E0909 00:32:58.615037 2703 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 00:32:58.615059 kubelet[2703]: W0909 00:32:58.615051 2703 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 00:32:58.615059 kubelet[2703]: E0909 00:32:58.615062 2703 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 00:32:58.615366 kubelet[2703]: E0909 00:32:58.615343 2703 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 00:32:58.615462 kubelet[2703]: W0909 00:32:58.615366 2703 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 00:32:58.615462 kubelet[2703]: E0909 00:32:58.615393 2703 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 00:32:58.615783 kubelet[2703]: E0909 00:32:58.615767 2703 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 00:32:58.615783 kubelet[2703]: W0909 00:32:58.615780 2703 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 00:32:58.615892 kubelet[2703]: E0909 00:32:58.615792 2703 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 9 00:32:58.616058 kubelet[2703]: E0909 00:32:58.616042 2703 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 00:32:58.616058 kubelet[2703]: W0909 00:32:58.616054 2703 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 00:32:58.616140 kubelet[2703]: E0909 00:32:58.616066 2703 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 00:32:58.616337 kubelet[2703]: E0909 00:32:58.616315 2703 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 00:32:58.616337 kubelet[2703]: W0909 00:32:58.616331 2703 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 00:32:58.616443 kubelet[2703]: E0909 00:32:58.616344 2703 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 00:32:58.616603 kubelet[2703]: E0909 00:32:58.616586 2703 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 00:32:58.616603 kubelet[2703]: W0909 00:32:58.616597 2703 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 00:32:58.616699 kubelet[2703]: E0909 00:32:58.616606 2703 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 00:32:58.616854 kubelet[2703]: E0909 00:32:58.616837 2703 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 00:32:58.616900 kubelet[2703]: W0909 00:32:58.616853 2703 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 00:32:58.616900 kubelet[2703]: E0909 00:32:58.616868 2703 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 00:32:58.617158 kubelet[2703]: E0909 00:32:58.617139 2703 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 00:32:58.617158 kubelet[2703]: W0909 00:32:58.617152 2703 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 00:32:58.617248 kubelet[2703]: E0909 00:32:58.617165 2703 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 9 00:32:58.617435 kubelet[2703]: E0909 00:32:58.617421 2703 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 00:32:58.617435 kubelet[2703]: W0909 00:32:58.617431 2703 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 00:32:58.617523 kubelet[2703]: E0909 00:32:58.617442 2703 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 00:32:58.617670 kubelet[2703]: E0909 00:32:58.617653 2703 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 00:32:58.617670 kubelet[2703]: W0909 00:32:58.617664 2703 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 00:32:58.617670 kubelet[2703]: E0909 00:32:58.617672 2703 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 00:32:58.617919 kubelet[2703]: E0909 00:32:58.617901 2703 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 00:32:58.617919 kubelet[2703]: W0909 00:32:58.617913 2703 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 00:32:58.617919 kubelet[2703]: E0909 00:32:58.617923 2703 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 00:32:58.618143 kubelet[2703]: E0909 00:32:58.618123 2703 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 00:32:58.618143 kubelet[2703]: W0909 00:32:58.618136 2703 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 00:32:58.618143 kubelet[2703]: E0909 00:32:58.618145 2703 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 00:32:58.618458 kubelet[2703]: E0909 00:32:58.618440 2703 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 00:32:58.618560 kubelet[2703]: W0909 00:32:58.618529 2703 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 00:32:58.618560 kubelet[2703]: E0909 00:32:58.618558 2703 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 9 00:32:58.618904 kubelet[2703]: E0909 00:32:58.618818 2703 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 00:32:58.618904 kubelet[2703]: W0909 00:32:58.618833 2703 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 00:32:58.618904 kubelet[2703]: E0909 00:32:58.618843 2703 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 00:32:58.619197 kubelet[2703]: E0909 00:32:58.619179 2703 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 00:32:58.619197 kubelet[2703]: W0909 00:32:58.619195 2703 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 00:32:58.619276 kubelet[2703]: E0909 00:32:58.619210 2703 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 00:32:58.619516 kubelet[2703]: E0909 00:32:58.619498 2703 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 00:32:58.619516 kubelet[2703]: W0909 00:32:58.619514 2703 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 00:32:58.619596 kubelet[2703]: E0909 00:32:58.619534 2703 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 00:32:58.619830 kubelet[2703]: E0909 00:32:58.619802 2703 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 00:32:58.619864 kubelet[2703]: W0909 00:32:58.619827 2703 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 00:32:58.619864 kubelet[2703]: E0909 00:32:58.619847 2703 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 00:32:58.620134 kubelet[2703]: E0909 00:32:58.620114 2703 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 00:32:58.620164 kubelet[2703]: W0909 00:32:58.620133 2703 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 00:32:58.620164 kubelet[2703]: E0909 00:32:58.620152 2703 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 9 00:32:58.620473 kubelet[2703]: E0909 00:32:58.620404 2703 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 00:32:58.620473 kubelet[2703]: W0909 00:32:58.620471 2703 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 00:32:58.620553 kubelet[2703]: E0909 00:32:58.620492 2703 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 00:32:58.620767 kubelet[2703]: E0909 00:32:58.620750 2703 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 00:32:58.620767 kubelet[2703]: W0909 00:32:58.620765 2703 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 00:32:58.620847 kubelet[2703]: E0909 00:32:58.620795 2703 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 00:32:58.621141 kubelet[2703]: E0909 00:32:58.621124 2703 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 00:32:58.621141 kubelet[2703]: W0909 00:32:58.621139 2703 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 00:32:58.621241 kubelet[2703]: E0909 00:32:58.621220 2703 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 00:32:58.621451 kubelet[2703]: E0909 00:32:58.621432 2703 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 00:32:58.621451 kubelet[2703]: W0909 00:32:58.621448 2703 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 00:32:58.621567 kubelet[2703]: E0909 00:32:58.621544 2703 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 00:32:58.621776 kubelet[2703]: E0909 00:32:58.621759 2703 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 00:32:58.621776 kubelet[2703]: W0909 00:32:58.621773 2703 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 00:32:58.621855 kubelet[2703]: E0909 00:32:58.621791 2703 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 9 00:32:58.622228 kubelet[2703]: E0909 00:32:58.622139 2703 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 00:32:58.622228 kubelet[2703]: W0909 00:32:58.622155 2703 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 00:32:58.622228 kubelet[2703]: E0909 00:32:58.622175 2703 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 00:32:58.622406 kubelet[2703]: E0909 00:32:58.622384 2703 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 00:32:58.622406 kubelet[2703]: W0909 00:32:58.622399 2703 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 00:32:58.622518 kubelet[2703]: E0909 00:32:58.622475 2703 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 00:32:58.622773 kubelet[2703]: E0909 00:32:58.622717 2703 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 00:32:58.622773 kubelet[2703]: W0909 00:32:58.622729 2703 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 00:32:58.622854 kubelet[2703]: E0909 00:32:58.622838 2703 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 00:32:58.623234 kubelet[2703]: E0909 00:32:58.623134 2703 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 00:32:58.623234 kubelet[2703]: W0909 00:32:58.623146 2703 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 00:32:58.623234 kubelet[2703]: E0909 00:32:58.623225 2703 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 00:32:58.623613 kubelet[2703]: E0909 00:32:58.623429 2703 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 00:32:58.623613 kubelet[2703]: W0909 00:32:58.623447 2703 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 00:32:58.623613 kubelet[2703]: E0909 00:32:58.623549 2703 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 9 00:32:58.623793 kubelet[2703]: E0909 00:32:58.623775 2703 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 00:32:58.623835 kubelet[2703]: W0909 00:32:58.623795 2703 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 00:32:58.623953 kubelet[2703]: E0909 00:32:58.623836 2703 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 00:32:58.624116 kubelet[2703]: E0909 00:32:58.624100 2703 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 00:32:58.624255 kubelet[2703]: W0909 00:32:58.624116 2703 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 00:32:58.624255 kubelet[2703]: E0909 00:32:58.624127 2703 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 00:32:58.624404 kubelet[2703]: E0909 00:32:58.624374 2703 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 00:32:58.624404 kubelet[2703]: W0909 00:32:58.624387 2703 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 00:32:58.624404 kubelet[2703]: E0909 00:32:58.624398 2703 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 00:32:58.624939 kubelet[2703]: E0909 00:32:58.624921 2703 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 00:32:58.624982 kubelet[2703]: W0909 00:32:58.624938 2703 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 00:32:58.624982 kubelet[2703]: E0909 00:32:58.624954 2703 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 9 00:32:59.158545 containerd[1592]: time="2025-09-09T00:32:59.158465206Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 00:32:59.159392 containerd[1592]: time="2025-09-09T00:32:59.159343993Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3: active requests=0, bytes read=4446660" Sep 9 00:32:59.161192 containerd[1592]: time="2025-09-09T00:32:59.161161196Z" level=info msg="ImageCreate event name:\"sha256:4f2b088ed6fdfc6a97ac0650a4ba8171107d6656ce265c592e4c8423fd10e5c4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 00:32:59.163430 containerd[1592]: time="2025-09-09T00:32:59.163359977Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:81bdfcd9dbd36624dc35354e8c181c75631ba40e6c7df5820f5f56cea36f0ef9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 00:32:59.164020 containerd[1592]: time="2025-09-09T00:32:59.163993370Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\" with image id \"sha256:4f2b088ed6fdfc6a97ac0650a4ba8171107d6656ce265c592e4c8423fd10e5c4\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:81bdfcd9dbd36624dc35354e8c181c75631ba40e6c7df5820f5f56cea36f0ef9\", size \"5939323\" in 1.683131315s" Sep 9 00:32:59.164063 containerd[1592]: time="2025-09-09T00:32:59.164023330Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\" returns image reference \"sha256:4f2b088ed6fdfc6a97ac0650a4ba8171107d6656ce265c592e4c8423fd10e5c4\"" Sep 9 00:32:59.172632 containerd[1592]: time="2025-09-09T00:32:59.172597228Z" level=info msg="CreateContainer within sandbox \"d924295075ba8269e317f5ad09b06d91c282d1d1206a080dd27cf99ff66f9509\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Sep 9 00:32:59.190524 containerd[1592]: time="2025-09-09T00:32:59.190458057Z" level=info msg="CreateContainer within sandbox \"d924295075ba8269e317f5ad09b06d91c282d1d1206a080dd27cf99ff66f9509\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"353ad5c1d5c3326c0e471653445e713291e5cfe49abb0f4313a54538c21342d0\"" Sep 9 00:32:59.191055 containerd[1592]: time="2025-09-09T00:32:59.191017281Z" level=info msg="StartContainer for \"353ad5c1d5c3326c0e471653445e713291e5cfe49abb0f4313a54538c21342d0\"" Sep 9 00:32:59.265277 containerd[1592]: time="2025-09-09T00:32:59.265231718Z" level=info msg="StartContainer for \"353ad5c1d5c3326c0e471653445e713291e5cfe49abb0f4313a54538c21342d0\" returns successfully" Sep 9 00:32:59.334366 containerd[1592]: time="2025-09-09T00:32:59.334063494Z" level=info msg="shim disconnected" id=353ad5c1d5c3326c0e471653445e713291e5cfe49abb0f4313a54538c21342d0 namespace=k8s.io Sep 9 00:32:59.334366 containerd[1592]: time="2025-09-09T00:32:59.334148014Z" level=warning msg="cleaning up after shim disconnected" id=353ad5c1d5c3326c0e471653445e713291e5cfe49abb0f4313a54538c21342d0 namespace=k8s.io Sep 9 00:32:59.334366 containerd[1592]: time="2025-09-09T00:32:59.334160048Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 9 00:32:59.490062 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-353ad5c1d5c3326c0e471653445e713291e5cfe49abb0f4313a54538c21342d0-rootfs.mount: Deactivated successfully. 
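The flexvol-driver container above runs Calico's pod2daemon-flexvol image as a run-to-completion init step: it copies the uds FlexVolume driver into the kubelet plugin directory, supplying exactly the binary whose absence caused the probe storm earlier in the log. The "shim disconnected" messages and the rootfs unmount that follow are the normal teardown of a container that exited successfully, not a crash. Once installed, such a driver only needs to answer the kubelet's init call with a well-formed JSON status; a minimal stand-in, sketching the FlexVolume calling convention rather than Calico's actual binary, could look like:

    package main

    import (
        "encoding/json"
        "fmt"
        "os"
    )

    // A FlexVolume driver is any executable in the kubelet plugin directory
    // that answers subcommands (init, mount, unmount, ...) with JSON on stdout.
    func main() {
        if len(os.Args) > 1 && os.Args[1] == "init" {
            // capabilities.attach=false tells the kubelet this driver has no
            // separate attach/detach phase.
            reply := map[string]interface{}{
                "status":       "Success",
                "capabilities": map[string]bool{"attach": false},
            }
            _ = json.NewEncoder(os.Stdout).Encode(reply)
            return
        }
        // Operations the driver does not implement are reported, per the
        // FlexVolume convention, with a "Not supported" status.
        fmt.Println(`{"status":"Not supported"}`)
    }

With the driver in place, the log moves on to the next bootstrap gap, the CNI plugin: the "cni plugin not initialized" messages, and the later RunPodSandbox failures complaining that /var/lib/calico/nodename is missing, persist until Calico's install-cni container (next entries) and the calico-node agent those errors point at are up.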
Sep 9 00:32:59.523991 kubelet[2703]: E0909 00:32:59.523901 2703 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-dvfmz" podUID="45baac1d-c9f0-4704-a887-7b015b292f0b" Sep 9 00:32:59.592283 kubelet[2703]: I0909 00:32:59.592252 2703 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 9 00:32:59.592633 kubelet[2703]: E0909 00:32:59.592609 2703 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:32:59.593351 containerd[1592]: time="2025-09-09T00:32:59.593310589Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.3\"" Sep 9 00:33:01.522790 kubelet[2703]: E0909 00:33:01.522722 2703 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-dvfmz" podUID="45baac1d-c9f0-4704-a887-7b015b292f0b" Sep 9 00:33:02.643528 kubelet[2703]: I0909 00:33:02.643458 2703 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 9 00:33:02.644167 kubelet[2703]: E0909 00:33:02.644018 2703 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:33:03.057135 containerd[1592]: time="2025-09-09T00:33:03.057090910Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 00:33:03.058485 containerd[1592]: time="2025-09-09T00:33:03.058437228Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.3: active requests=0, bytes read=70440613" Sep 9 00:33:03.059716 containerd[1592]: time="2025-09-09T00:33:03.059686651Z" level=info msg="ImageCreate event name:\"sha256:034822460c2f667e1f4a7679c843cc35ce1bf2c25dec86f04e07fb403df7e458\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 00:33:03.062176 containerd[1592]: time="2025-09-09T00:33:03.062099916Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:73d1e391050490d54e5bee8ff2b1a50a8be1746c98dc530361b00e8c0ab63f87\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 00:33:03.062783 containerd[1592]: time="2025-09-09T00:33:03.062719507Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.3\" with image id \"sha256:034822460c2f667e1f4a7679c843cc35ce1bf2c25dec86f04e07fb403df7e458\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:73d1e391050490d54e5bee8ff2b1a50a8be1746c98dc530361b00e8c0ab63f87\", size \"71933316\" in 3.469308197s" Sep 9 00:33:03.062783 containerd[1592]: time="2025-09-09T00:33:03.062762693Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.3\" returns image reference \"sha256:034822460c2f667e1f4a7679c843cc35ce1bf2c25dec86f04e07fb403df7e458\"" Sep 9 00:33:03.065050 containerd[1592]: time="2025-09-09T00:33:03.065005607Z" level=info msg="CreateContainer within sandbox \"d924295075ba8269e317f5ad09b06d91c282d1d1206a080dd27cf99ff66f9509\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Sep 9 00:33:03.080244 
containerd[1592]: time="2025-09-09T00:33:03.080194627Z" level=info msg="CreateContainer within sandbox \"d924295075ba8269e317f5ad09b06d91c282d1d1206a080dd27cf99ff66f9509\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"5c9e76f00a5dc8920a2b1e9f39a20cdf0130cb2dbbd5dfd4b2b433ba381db4d4\"" Sep 9 00:33:03.081028 containerd[1592]: time="2025-09-09T00:33:03.080992065Z" level=info msg="StartContainer for \"5c9e76f00a5dc8920a2b1e9f39a20cdf0130cb2dbbd5dfd4b2b433ba381db4d4\"" Sep 9 00:33:03.146561 containerd[1592]: time="2025-09-09T00:33:03.146512132Z" level=info msg="StartContainer for \"5c9e76f00a5dc8920a2b1e9f39a20cdf0130cb2dbbd5dfd4b2b433ba381db4d4\" returns successfully" Sep 9 00:33:03.522823 kubelet[2703]: E0909 00:33:03.522750 2703 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-dvfmz" podUID="45baac1d-c9f0-4704-a887-7b015b292f0b" Sep 9 00:33:03.601072 kubelet[2703]: E0909 00:33:03.601032 2703 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:33:04.586686 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-5c9e76f00a5dc8920a2b1e9f39a20cdf0130cb2dbbd5dfd4b2b433ba381db4d4-rootfs.mount: Deactivated successfully. Sep 9 00:33:04.589173 containerd[1592]: time="2025-09-09T00:33:04.589120711Z" level=info msg="shim disconnected" id=5c9e76f00a5dc8920a2b1e9f39a20cdf0130cb2dbbd5dfd4b2b433ba381db4d4 namespace=k8s.io Sep 9 00:33:04.589173 containerd[1592]: time="2025-09-09T00:33:04.589170129Z" level=warning msg="cleaning up after shim disconnected" id=5c9e76f00a5dc8920a2b1e9f39a20cdf0130cb2dbbd5dfd4b2b433ba381db4d4 namespace=k8s.io Sep 9 00:33:04.589633 containerd[1592]: time="2025-09-09T00:33:04.589180660Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 9 00:33:04.599051 kubelet[2703]: I0909 00:33:04.599028 2703 kubelet_node_status.go:488] "Fast updating node status as it just became ready" Sep 9 00:33:04.638205 containerd[1592]: time="2025-09-09T00:33:04.636526054Z" level=warning msg="cleanup warnings time=\"2025-09-09T00:33:04Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Sep 9 00:33:04.771562 kubelet[2703]: I0909 00:33:04.769992 2703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/25010456-202e-41ea-aa9f-fe497ae64e66-tigera-ca-bundle\") pod \"calico-kube-controllers-55bfc5d889-9l6s4\" (UID: \"25010456-202e-41ea-aa9f-fe497ae64e66\") " pod="calico-system/calico-kube-controllers-55bfc5d889-9l6s4" Sep 9 00:33:04.771562 kubelet[2703]: I0909 00:33:04.770053 2703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l2jsq\" (UniqueName: \"kubernetes.io/projected/e073184d-6f60-4919-94e6-d04e6ac8bc91-kube-api-access-l2jsq\") pod \"calico-apiserver-6544c75f8-qw6bt\" (UID: \"e073184d-6f60-4919-94e6-d04e6ac8bc91\") " pod="calico-apiserver/calico-apiserver-6544c75f8-qw6bt" Sep 9 00:33:04.771562 kubelet[2703]: I0909 00:33:04.770079 2703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/1a0e64da-2e8f-4229-9092-4e3f71b7565b-goldmane-key-pair\") pod \"goldmane-7988f88666-csr6g\" (UID: \"1a0e64da-2e8f-4229-9092-4e3f71b7565b\") " pod="calico-system/goldmane-7988f88666-csr6g" Sep 9 00:33:04.771562 kubelet[2703]: I0909 00:33:04.770104 2703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/e073184d-6f60-4919-94e6-d04e6ac8bc91-calico-apiserver-certs\") pod \"calico-apiserver-6544c75f8-qw6bt\" (UID: \"e073184d-6f60-4919-94e6-d04e6ac8bc91\") " pod="calico-apiserver/calico-apiserver-6544c75f8-qw6bt" Sep 9 00:33:04.771562 kubelet[2703]: I0909 00:33:04.770124 2703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/cba14676-c2d2-4393-a6fd-b4ef0dc67fba-calico-apiserver-certs\") pod \"calico-apiserver-6544c75f8-n8q9k\" (UID: \"cba14676-c2d2-4393-a6fd-b4ef0dc67fba\") " pod="calico-apiserver/calico-apiserver-6544c75f8-n8q9k" Sep 9 00:33:04.771848 kubelet[2703]: I0909 00:33:04.770145 2703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a104faab-ebd4-4510-b157-e4917f6c56e1-config-volume\") pod \"coredns-7c65d6cfc9-45wzl\" (UID: \"a104faab-ebd4-4510-b157-e4917f6c56e1\") " pod="kube-system/coredns-7c65d6cfc9-45wzl" Sep 9 00:33:04.771848 kubelet[2703]: I0909 00:33:04.770167 2703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k6fz9\" (UniqueName: \"kubernetes.io/projected/ec5d7db1-2706-41c7-b992-bd43c3dcfac0-kube-api-access-k6fz9\") pod \"coredns-7c65d6cfc9-4m9xw\" (UID: \"ec5d7db1-2706-41c7-b992-bd43c3dcfac0\") " pod="kube-system/coredns-7c65d6cfc9-4m9xw" Sep 9 00:33:04.771848 kubelet[2703]: I0909 00:33:04.770184 2703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1a0e64da-2e8f-4229-9092-4e3f71b7565b-config\") pod \"goldmane-7988f88666-csr6g\" (UID: \"1a0e64da-2e8f-4229-9092-4e3f71b7565b\") " pod="calico-system/goldmane-7988f88666-csr6g" Sep 9 00:33:04.771848 kubelet[2703]: I0909 00:33:04.770212 2703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1a0e64da-2e8f-4229-9092-4e3f71b7565b-goldmane-ca-bundle\") pod \"goldmane-7988f88666-csr6g\" (UID: \"1a0e64da-2e8f-4229-9092-4e3f71b7565b\") " pod="calico-system/goldmane-7988f88666-csr6g" Sep 9 00:33:04.771848 kubelet[2703]: I0909 00:33:04.770228 2703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m666f\" (UniqueName: \"kubernetes.io/projected/1a0e64da-2e8f-4229-9092-4e3f71b7565b-kube-api-access-m666f\") pod \"goldmane-7988f88666-csr6g\" (UID: \"1a0e64da-2e8f-4229-9092-4e3f71b7565b\") " pod="calico-system/goldmane-7988f88666-csr6g" Sep 9 00:33:04.771992 kubelet[2703]: I0909 00:33:04.770248 2703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c48bt\" (UniqueName: \"kubernetes.io/projected/79dee799-ef56-4018-b0a2-0bab65c4eb15-kube-api-access-c48bt\") pod \"whisker-864684cb86-jwk6q\" (UID: \"79dee799-ef56-4018-b0a2-0bab65c4eb15\") " pod="calico-system/whisker-864684cb86-jwk6q" Sep 9 
00:33:04.771992 kubelet[2703]: I0909 00:33:04.770265 2703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ec5d7db1-2706-41c7-b992-bd43c3dcfac0-config-volume\") pod \"coredns-7c65d6cfc9-4m9xw\" (UID: \"ec5d7db1-2706-41c7-b992-bd43c3dcfac0\") " pod="kube-system/coredns-7c65d6cfc9-4m9xw" Sep 9 00:33:04.771992 kubelet[2703]: I0909 00:33:04.770282 2703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nfqhq\" (UniqueName: \"kubernetes.io/projected/cba14676-c2d2-4393-a6fd-b4ef0dc67fba-kube-api-access-nfqhq\") pod \"calico-apiserver-6544c75f8-n8q9k\" (UID: \"cba14676-c2d2-4393-a6fd-b4ef0dc67fba\") " pod="calico-apiserver/calico-apiserver-6544c75f8-n8q9k" Sep 9 00:33:04.771992 kubelet[2703]: I0909 00:33:04.770300 2703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l9qkt\" (UniqueName: \"kubernetes.io/projected/a104faab-ebd4-4510-b157-e4917f6c56e1-kube-api-access-l9qkt\") pod \"coredns-7c65d6cfc9-45wzl\" (UID: \"a104faab-ebd4-4510-b157-e4917f6c56e1\") " pod="kube-system/coredns-7c65d6cfc9-45wzl" Sep 9 00:33:04.771992 kubelet[2703]: I0909 00:33:04.770319 2703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/79dee799-ef56-4018-b0a2-0bab65c4eb15-whisker-backend-key-pair\") pod \"whisker-864684cb86-jwk6q\" (UID: \"79dee799-ef56-4018-b0a2-0bab65c4eb15\") " pod="calico-system/whisker-864684cb86-jwk6q" Sep 9 00:33:04.772131 kubelet[2703]: I0909 00:33:04.770342 2703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/79dee799-ef56-4018-b0a2-0bab65c4eb15-whisker-ca-bundle\") pod \"whisker-864684cb86-jwk6q\" (UID: \"79dee799-ef56-4018-b0a2-0bab65c4eb15\") " pod="calico-system/whisker-864684cb86-jwk6q" Sep 9 00:33:04.772131 kubelet[2703]: I0909 00:33:04.770377 2703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cwhbh\" (UniqueName: \"kubernetes.io/projected/25010456-202e-41ea-aa9f-fe497ae64e66-kube-api-access-cwhbh\") pod \"calico-kube-controllers-55bfc5d889-9l6s4\" (UID: \"25010456-202e-41ea-aa9f-fe497ae64e66\") " pod="calico-system/calico-kube-controllers-55bfc5d889-9l6s4" Sep 9 00:33:04.941171 containerd[1592]: time="2025-09-09T00:33:04.941101734Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7988f88666-csr6g,Uid:1a0e64da-2e8f-4229-9092-4e3f71b7565b,Namespace:calico-system,Attempt:0,}" Sep 9 00:33:04.944403 kubelet[2703]: E0909 00:33:04.944364 2703 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:33:04.944887 containerd[1592]: time="2025-09-09T00:33:04.944844187Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-4m9xw,Uid:ec5d7db1-2706-41c7-b992-bd43c3dcfac0,Namespace:kube-system,Attempt:0,}" Sep 9 00:33:04.952115 containerd[1592]: time="2025-09-09T00:33:04.952072177Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6544c75f8-qw6bt,Uid:e073184d-6f60-4919-94e6-d04e6ac8bc91,Namespace:calico-apiserver,Attempt:0,}" Sep 9 00:33:04.952991 kubelet[2703]: E0909 00:33:04.952236 2703 dns.go:153] 
"Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:33:04.953998 containerd[1592]: time="2025-09-09T00:33:04.953003341Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-45wzl,Uid:a104faab-ebd4-4510-b157-e4917f6c56e1,Namespace:kube-system,Attempt:0,}" Sep 9 00:33:04.956858 containerd[1592]: time="2025-09-09T00:33:04.956826775Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-864684cb86-jwk6q,Uid:79dee799-ef56-4018-b0a2-0bab65c4eb15,Namespace:calico-system,Attempt:0,}" Sep 9 00:33:04.956968 containerd[1592]: time="2025-09-09T00:33:04.956881995Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6544c75f8-n8q9k,Uid:cba14676-c2d2-4393-a6fd-b4ef0dc67fba,Namespace:calico-apiserver,Attempt:0,}" Sep 9 00:33:04.956968 containerd[1592]: time="2025-09-09T00:33:04.956831665Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-55bfc5d889-9l6s4,Uid:25010456-202e-41ea-aa9f-fe497ae64e66,Namespace:calico-system,Attempt:0,}" Sep 9 00:33:05.158465 containerd[1592]: time="2025-09-09T00:33:05.158386600Z" level=error msg="Failed to destroy network for sandbox \"41eaf270ad07ea5dec1b464304cb6153982cf5ead00602cfbaf34784c065a96a\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 00:33:05.164190 containerd[1592]: time="2025-09-09T00:33:05.163454649Z" level=error msg="encountered an error cleaning up failed sandbox \"41eaf270ad07ea5dec1b464304cb6153982cf5ead00602cfbaf34784c065a96a\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 00:33:05.164190 containerd[1592]: time="2025-09-09T00:33:05.163537274Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7988f88666-csr6g,Uid:1a0e64da-2e8f-4229-9092-4e3f71b7565b,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"41eaf270ad07ea5dec1b464304cb6153982cf5ead00602cfbaf34784c065a96a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 00:33:05.167687 containerd[1592]: time="2025-09-09T00:33:05.167628519Z" level=error msg="Failed to destroy network for sandbox \"99a29b286bc17ccc596281946947470e062d46364a22d7b602e1cec39d31bc2e\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 00:33:05.168149 containerd[1592]: time="2025-09-09T00:33:05.168118189Z" level=error msg="encountered an error cleaning up failed sandbox \"99a29b286bc17ccc596281946947470e062d46364a22d7b602e1cec39d31bc2e\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 00:33:05.168189 containerd[1592]: time="2025-09-09T00:33:05.168173720Z" level=error msg="RunPodSandbox for 
&PodSandboxMetadata{Name:coredns-7c65d6cfc9-4m9xw,Uid:ec5d7db1-2706-41c7-b992-bd43c3dcfac0,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"99a29b286bc17ccc596281946947470e062d46364a22d7b602e1cec39d31bc2e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 00:33:05.175068 kubelet[2703]: E0909 00:33:05.175030 2703 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"99a29b286bc17ccc596281946947470e062d46364a22d7b602e1cec39d31bc2e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 00:33:05.177056 kubelet[2703]: E0909 00:33:05.176456 2703 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"41eaf270ad07ea5dec1b464304cb6153982cf5ead00602cfbaf34784c065a96a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 00:33:05.177056 kubelet[2703]: E0909 00:33:05.176609 2703 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"41eaf270ad07ea5dec1b464304cb6153982cf5ead00602cfbaf34784c065a96a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-7988f88666-csr6g" Sep 9 00:33:05.177056 kubelet[2703]: E0909 00:33:05.176633 2703 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"41eaf270ad07ea5dec1b464304cb6153982cf5ead00602cfbaf34784c065a96a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-7988f88666-csr6g" Sep 9 00:33:05.177056 kubelet[2703]: E0909 00:33:05.176677 2703 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"99a29b286bc17ccc596281946947470e062d46364a22d7b602e1cec39d31bc2e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-4m9xw" Sep 9 00:33:05.177197 kubelet[2703]: E0909 00:33:05.176689 2703 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"99a29b286bc17ccc596281946947470e062d46364a22d7b602e1cec39d31bc2e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-4m9xw" Sep 9 00:33:05.177197 kubelet[2703]: E0909 00:33:05.176970 2703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-7988f88666-csr6g_calico-system(1a0e64da-2e8f-4229-9092-4e3f71b7565b)\" with CreatePodSandboxError: \"Failed to 
create sandbox for pod \\\"goldmane-7988f88666-csr6g_calico-system(1a0e64da-2e8f-4229-9092-4e3f71b7565b)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"41eaf270ad07ea5dec1b464304cb6153982cf5ead00602cfbaf34784c065a96a\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-7988f88666-csr6g" podUID="1a0e64da-2e8f-4229-9092-4e3f71b7565b" Sep 9 00:33:05.177197 kubelet[2703]: E0909 00:33:05.177024 2703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7c65d6cfc9-4m9xw_kube-system(ec5d7db1-2706-41c7-b992-bd43c3dcfac0)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7c65d6cfc9-4m9xw_kube-system(ec5d7db1-2706-41c7-b992-bd43c3dcfac0)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"99a29b286bc17ccc596281946947470e062d46364a22d7b602e1cec39d31bc2e\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-4m9xw" podUID="ec5d7db1-2706-41c7-b992-bd43c3dcfac0" Sep 9 00:33:05.179667 containerd[1592]: time="2025-09-09T00:33:05.179044377Z" level=error msg="Failed to destroy network for sandbox \"28219d814e8abef26e2c4210e72e52387ebc3b0ff691430eaba229212e0d1110\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 00:33:05.179667 containerd[1592]: time="2025-09-09T00:33:05.179518627Z" level=error msg="encountered an error cleaning up failed sandbox \"28219d814e8abef26e2c4210e72e52387ebc3b0ff691430eaba229212e0d1110\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 00:33:05.179667 containerd[1592]: time="2025-09-09T00:33:05.179560781Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-864684cb86-jwk6q,Uid:79dee799-ef56-4018-b0a2-0bab65c4eb15,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"28219d814e8abef26e2c4210e72e52387ebc3b0ff691430eaba229212e0d1110\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 00:33:05.180323 kubelet[2703]: E0909 00:33:05.180118 2703 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"28219d814e8abef26e2c4210e72e52387ebc3b0ff691430eaba229212e0d1110\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 00:33:05.180323 kubelet[2703]: E0909 00:33:05.180185 2703 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"28219d814e8abef26e2c4210e72e52387ebc3b0ff691430eaba229212e0d1110\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running 
and has mounted /var/lib/calico/" pod="calico-system/whisker-864684cb86-jwk6q" Sep 9 00:33:05.180323 kubelet[2703]: E0909 00:33:05.180214 2703 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"28219d814e8abef26e2c4210e72e52387ebc3b0ff691430eaba229212e0d1110\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-864684cb86-jwk6q" Sep 9 00:33:05.180502 kubelet[2703]: E0909 00:33:05.180277 2703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-864684cb86-jwk6q_calico-system(79dee799-ef56-4018-b0a2-0bab65c4eb15)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-864684cb86-jwk6q_calico-system(79dee799-ef56-4018-b0a2-0bab65c4eb15)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"28219d814e8abef26e2c4210e72e52387ebc3b0ff691430eaba229212e0d1110\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-864684cb86-jwk6q" podUID="79dee799-ef56-4018-b0a2-0bab65c4eb15" Sep 9 00:33:05.195108 containerd[1592]: time="2025-09-09T00:33:05.194944907Z" level=error msg="Failed to destroy network for sandbox \"d2e24a1ecc7d9da02b3a4c3b268eb9aa6520b3498cba4e49f0f6eac8888917f3\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 00:33:05.195445 containerd[1592]: time="2025-09-09T00:33:05.195389046Z" level=error msg="encountered an error cleaning up failed sandbox \"d2e24a1ecc7d9da02b3a4c3b268eb9aa6520b3498cba4e49f0f6eac8888917f3\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 00:33:05.195509 containerd[1592]: time="2025-09-09T00:33:05.195458635Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6544c75f8-n8q9k,Uid:cba14676-c2d2-4393-a6fd-b4ef0dc67fba,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"d2e24a1ecc7d9da02b3a4c3b268eb9aa6520b3498cba4e49f0f6eac8888917f3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 00:33:05.195755 kubelet[2703]: E0909 00:33:05.195708 2703 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d2e24a1ecc7d9da02b3a4c3b268eb9aa6520b3498cba4e49f0f6eac8888917f3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 00:33:05.195818 kubelet[2703]: E0909 00:33:05.195783 2703 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d2e24a1ecc7d9da02b3a4c3b268eb9aa6520b3498cba4e49f0f6eac8888917f3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file 
or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6544c75f8-n8q9k" Sep 9 00:33:05.195859 kubelet[2703]: E0909 00:33:05.195821 2703 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d2e24a1ecc7d9da02b3a4c3b268eb9aa6520b3498cba4e49f0f6eac8888917f3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6544c75f8-n8q9k" Sep 9 00:33:05.195893 kubelet[2703]: E0909 00:33:05.195868 2703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-6544c75f8-n8q9k_calico-apiserver(cba14676-c2d2-4393-a6fd-b4ef0dc67fba)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-6544c75f8-n8q9k_calico-apiserver(cba14676-c2d2-4393-a6fd-b4ef0dc67fba)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"d2e24a1ecc7d9da02b3a4c3b268eb9aa6520b3498cba4e49f0f6eac8888917f3\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-6544c75f8-n8q9k" podUID="cba14676-c2d2-4393-a6fd-b4ef0dc67fba" Sep 9 00:33:05.198566 containerd[1592]: time="2025-09-09T00:33:05.198344528Z" level=error msg="Failed to destroy network for sandbox \"3663276c0852bd66d1bd2c667fe721058a9725505d67f72a86fbeb01c8f4e4d7\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 00:33:05.198566 containerd[1592]: time="2025-09-09T00:33:05.198345271Z" level=error msg="Failed to destroy network for sandbox \"dc974c20813b8cc8d10aa01c404cb83680815b33858df5921f8aaa95fbe0dd7a\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 00:33:05.198805 containerd[1592]: time="2025-09-09T00:33:05.198774899Z" level=error msg="encountered an error cleaning up failed sandbox \"3663276c0852bd66d1bd2c667fe721058a9725505d67f72a86fbeb01c8f4e4d7\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 00:33:05.198837 containerd[1592]: time="2025-09-09T00:33:05.198816512Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-45wzl,Uid:a104faab-ebd4-4510-b157-e4917f6c56e1,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"3663276c0852bd66d1bd2c667fe721058a9725505d67f72a86fbeb01c8f4e4d7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 00:33:05.199004 kubelet[2703]: E0909 00:33:05.198962 2703 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3663276c0852bd66d1bd2c667fe721058a9725505d67f72a86fbeb01c8f4e4d7\": plugin type=\"calico\" failed 
(add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 00:33:05.199071 kubelet[2703]: E0909 00:33:05.199013 2703 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3663276c0852bd66d1bd2c667fe721058a9725505d67f72a86fbeb01c8f4e4d7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-45wzl" Sep 9 00:33:05.199071 kubelet[2703]: E0909 00:33:05.199035 2703 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3663276c0852bd66d1bd2c667fe721058a9725505d67f72a86fbeb01c8f4e4d7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-45wzl" Sep 9 00:33:05.199161 kubelet[2703]: E0909 00:33:05.199071 2703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7c65d6cfc9-45wzl_kube-system(a104faab-ebd4-4510-b157-e4917f6c56e1)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7c65d6cfc9-45wzl_kube-system(a104faab-ebd4-4510-b157-e4917f6c56e1)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"3663276c0852bd66d1bd2c667fe721058a9725505d67f72a86fbeb01c8f4e4d7\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-45wzl" podUID="a104faab-ebd4-4510-b157-e4917f6c56e1" Sep 9 00:33:05.199221 containerd[1592]: time="2025-09-09T00:33:05.199110761Z" level=error msg="encountered an error cleaning up failed sandbox \"dc974c20813b8cc8d10aa01c404cb83680815b33858df5921f8aaa95fbe0dd7a\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 00:33:05.199221 containerd[1592]: time="2025-09-09T00:33:05.199147585Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-55bfc5d889-9l6s4,Uid:25010456-202e-41ea-aa9f-fe497ae64e66,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"dc974c20813b8cc8d10aa01c404cb83680815b33858df5921f8aaa95fbe0dd7a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 00:33:05.199311 kubelet[2703]: E0909 00:33:05.199278 2703 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"dc974c20813b8cc8d10aa01c404cb83680815b33858df5921f8aaa95fbe0dd7a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 00:33:05.199352 kubelet[2703]: E0909 00:33:05.199308 2703 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network 
for sandbox \"dc974c20813b8cc8d10aa01c404cb83680815b33858df5921f8aaa95fbe0dd7a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-55bfc5d889-9l6s4" Sep 9 00:33:05.199352 kubelet[2703]: E0909 00:33:05.199324 2703 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"dc974c20813b8cc8d10aa01c404cb83680815b33858df5921f8aaa95fbe0dd7a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-55bfc5d889-9l6s4" Sep 9 00:33:05.199432 kubelet[2703]: E0909 00:33:05.199356 2703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-55bfc5d889-9l6s4_calico-system(25010456-202e-41ea-aa9f-fe497ae64e66)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-55bfc5d889-9l6s4_calico-system(25010456-202e-41ea-aa9f-fe497ae64e66)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"dc974c20813b8cc8d10aa01c404cb83680815b33858df5921f8aaa95fbe0dd7a\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-55bfc5d889-9l6s4" podUID="25010456-202e-41ea-aa9f-fe497ae64e66" Sep 9 00:33:05.220418 containerd[1592]: time="2025-09-09T00:33:05.220351365Z" level=error msg="Failed to destroy network for sandbox \"840ec3158e679d69dff21a50896b36dbde90719eff79e14b22894ff7dc8f2486\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 00:33:05.220860 containerd[1592]: time="2025-09-09T00:33:05.220827077Z" level=error msg="encountered an error cleaning up failed sandbox \"840ec3158e679d69dff21a50896b36dbde90719eff79e14b22894ff7dc8f2486\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 00:33:05.220913 containerd[1592]: time="2025-09-09T00:33:05.220878720Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6544c75f8-qw6bt,Uid:e073184d-6f60-4919-94e6-d04e6ac8bc91,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"840ec3158e679d69dff21a50896b36dbde90719eff79e14b22894ff7dc8f2486\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 00:33:05.221191 kubelet[2703]: E0909 00:33:05.221134 2703 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"840ec3158e679d69dff21a50896b36dbde90719eff79e14b22894ff7dc8f2486\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 00:33:05.221191 kubelet[2703]: E0909 
00:33:05.221193 2703 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"840ec3158e679d69dff21a50896b36dbde90719eff79e14b22894ff7dc8f2486\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6544c75f8-qw6bt" Sep 9 00:33:05.221369 kubelet[2703]: E0909 00:33:05.221212 2703 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"840ec3158e679d69dff21a50896b36dbde90719eff79e14b22894ff7dc8f2486\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6544c75f8-qw6bt" Sep 9 00:33:05.221369 kubelet[2703]: E0909 00:33:05.221258 2703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-6544c75f8-qw6bt_calico-apiserver(e073184d-6f60-4919-94e6-d04e6ac8bc91)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-6544c75f8-qw6bt_calico-apiserver(e073184d-6f60-4919-94e6-d04e6ac8bc91)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"840ec3158e679d69dff21a50896b36dbde90719eff79e14b22894ff7dc8f2486\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-6544c75f8-qw6bt" podUID="e073184d-6f60-4919-94e6-d04e6ac8bc91" Sep 9 00:33:05.526506 containerd[1592]: time="2025-09-09T00:33:05.526333974Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-dvfmz,Uid:45baac1d-c9f0-4704-a887-7b015b292f0b,Namespace:calico-system,Attempt:0,}" Sep 9 00:33:05.605637 kubelet[2703]: I0909 00:33:05.605594 2703 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="840ec3158e679d69dff21a50896b36dbde90719eff79e14b22894ff7dc8f2486" Sep 9 00:33:05.606761 kubelet[2703]: I0909 00:33:05.606739 2703 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="41eaf270ad07ea5dec1b464304cb6153982cf5ead00602cfbaf34784c065a96a" Sep 9 00:33:05.608762 kubelet[2703]: I0909 00:33:05.608726 2703 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="28219d814e8abef26e2c4210e72e52387ebc3b0ff691430eaba229212e0d1110" Sep 9 00:33:05.610347 kubelet[2703]: I0909 00:33:05.610326 2703 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3663276c0852bd66d1bd2c667fe721058a9725505d67f72a86fbeb01c8f4e4d7" Sep 9 00:33:05.623084 containerd[1592]: time="2025-09-09T00:33:05.623026263Z" level=info msg="StopPodSandbox for \"28219d814e8abef26e2c4210e72e52387ebc3b0ff691430eaba229212e0d1110\"" Sep 9 00:33:05.631486 containerd[1592]: time="2025-09-09T00:33:05.631224635Z" level=info msg="Ensure that sandbox 28219d814e8abef26e2c4210e72e52387ebc3b0ff691430eaba229212e0d1110 in task-service has been cleanup successfully" Sep 9 00:33:05.638797 containerd[1592]: time="2025-09-09T00:33:05.638762686Z" level=info msg="StopPodSandbox for \"3663276c0852bd66d1bd2c667fe721058a9725505d67f72a86fbeb01c8f4e4d7\"" Sep 9 00:33:05.639069 containerd[1592]: 
time="2025-09-09T00:33:05.639038717Z" level=info msg="Ensure that sandbox 3663276c0852bd66d1bd2c667fe721058a9725505d67f72a86fbeb01c8f4e4d7 in task-service has been cleanup successfully" Sep 9 00:33:05.640549 containerd[1592]: time="2025-09-09T00:33:05.640506714Z" level=info msg="StopPodSandbox for \"840ec3158e679d69dff21a50896b36dbde90719eff79e14b22894ff7dc8f2486\"" Sep 9 00:33:05.640732 containerd[1592]: time="2025-09-09T00:33:05.640711423Z" level=info msg="Ensure that sandbox 840ec3158e679d69dff21a50896b36dbde90719eff79e14b22894ff7dc8f2486 in task-service has been cleanup successfully" Sep 9 00:33:05.641501 containerd[1592]: time="2025-09-09T00:33:05.641457787Z" level=info msg="StopPodSandbox for \"41eaf270ad07ea5dec1b464304cb6153982cf5ead00602cfbaf34784c065a96a\"" Sep 9 00:33:05.641687 containerd[1592]: time="2025-09-09T00:33:05.641661524Z" level=info msg="Ensure that sandbox 41eaf270ad07ea5dec1b464304cb6153982cf5ead00602cfbaf34784c065a96a in task-service has been cleanup successfully" Sep 9 00:33:05.644924 kubelet[2703]: I0909 00:33:05.644904 2703 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="dc974c20813b8cc8d10aa01c404cb83680815b33858df5921f8aaa95fbe0dd7a" Sep 9 00:33:05.645923 containerd[1592]: time="2025-09-09T00:33:05.645779052Z" level=info msg="StopPodSandbox for \"dc974c20813b8cc8d10aa01c404cb83680815b33858df5921f8aaa95fbe0dd7a\"" Sep 9 00:33:05.648474 kubelet[2703]: I0909 00:33:05.647033 2703 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="99a29b286bc17ccc596281946947470e062d46364a22d7b602e1cec39d31bc2e" Sep 9 00:33:05.648701 containerd[1592]: time="2025-09-09T00:33:05.648682551Z" level=info msg="StopPodSandbox for \"99a29b286bc17ccc596281946947470e062d46364a22d7b602e1cec39d31bc2e\"" Sep 9 00:33:05.648908 containerd[1592]: time="2025-09-09T00:33:05.648892340Z" level=info msg="Ensure that sandbox 99a29b286bc17ccc596281946947470e062d46364a22d7b602e1cec39d31bc2e in task-service has been cleanup successfully" Sep 9 00:33:05.673455 kubelet[2703]: I0909 00:33:05.671786 2703 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d2e24a1ecc7d9da02b3a4c3b268eb9aa6520b3498cba4e49f0f6eac8888917f3" Sep 9 00:33:05.673634 containerd[1592]: time="2025-09-09T00:33:05.672376812Z" level=info msg="StopPodSandbox for \"d2e24a1ecc7d9da02b3a4c3b268eb9aa6520b3498cba4e49f0f6eac8888917f3\"" Sep 9 00:33:05.673634 containerd[1592]: time="2025-09-09T00:33:05.672632323Z" level=info msg="Ensure that sandbox d2e24a1ecc7d9da02b3a4c3b268eb9aa6520b3498cba4e49f0f6eac8888917f3 in task-service has been cleanup successfully" Sep 9 00:33:05.680849 containerd[1592]: time="2025-09-09T00:33:05.678979892Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.3\"" Sep 9 00:33:05.681488 containerd[1592]: time="2025-09-09T00:33:05.681397259Z" level=info msg="Ensure that sandbox dc974c20813b8cc8d10aa01c404cb83680815b33858df5921f8aaa95fbe0dd7a in task-service has been cleanup successfully" Sep 9 00:33:05.722090 containerd[1592]: time="2025-09-09T00:33:05.722019646Z" level=error msg="StopPodSandbox for \"41eaf270ad07ea5dec1b464304cb6153982cf5ead00602cfbaf34784c065a96a\" failed" error="failed to destroy network for sandbox \"41eaf270ad07ea5dec1b464304cb6153982cf5ead00602cfbaf34784c065a96a\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 00:33:05.722388 kubelet[2703]: E0909 
00:33:05.722325 2703 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"41eaf270ad07ea5dec1b464304cb6153982cf5ead00602cfbaf34784c065a96a\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="41eaf270ad07ea5dec1b464304cb6153982cf5ead00602cfbaf34784c065a96a" Sep 9 00:33:05.722528 kubelet[2703]: E0909 00:33:05.722401 2703 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"41eaf270ad07ea5dec1b464304cb6153982cf5ead00602cfbaf34784c065a96a"} Sep 9 00:33:05.722528 kubelet[2703]: E0909 00:33:05.722514 2703 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"1a0e64da-2e8f-4229-9092-4e3f71b7565b\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"41eaf270ad07ea5dec1b464304cb6153982cf5ead00602cfbaf34784c065a96a\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 9 00:33:05.722643 kubelet[2703]: E0909 00:33:05.722547 2703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"1a0e64da-2e8f-4229-9092-4e3f71b7565b\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"41eaf270ad07ea5dec1b464304cb6153982cf5ead00602cfbaf34784c065a96a\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-7988f88666-csr6g" podUID="1a0e64da-2e8f-4229-9092-4e3f71b7565b" Sep 9 00:33:05.725007 containerd[1592]: time="2025-09-09T00:33:05.724882073Z" level=error msg="StopPodSandbox for \"840ec3158e679d69dff21a50896b36dbde90719eff79e14b22894ff7dc8f2486\" failed" error="failed to destroy network for sandbox \"840ec3158e679d69dff21a50896b36dbde90719eff79e14b22894ff7dc8f2486\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 00:33:05.725147 kubelet[2703]: E0909 00:33:05.725074 2703 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"840ec3158e679d69dff21a50896b36dbde90719eff79e14b22894ff7dc8f2486\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="840ec3158e679d69dff21a50896b36dbde90719eff79e14b22894ff7dc8f2486" Sep 9 00:33:05.725198 kubelet[2703]: E0909 00:33:05.725147 2703 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"840ec3158e679d69dff21a50896b36dbde90719eff79e14b22894ff7dc8f2486"} Sep 9 00:33:05.725198 kubelet[2703]: E0909 00:33:05.725172 2703 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"e073184d-6f60-4919-94e6-d04e6ac8bc91\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"840ec3158e679d69dff21a50896b36dbde90719eff79e14b22894ff7dc8f2486\\\": plugin type=\\\"calico\\\" 
failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 9 00:33:05.725198 kubelet[2703]: E0909 00:33:05.725190 2703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"e073184d-6f60-4919-94e6-d04e6ac8bc91\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"840ec3158e679d69dff21a50896b36dbde90719eff79e14b22894ff7dc8f2486\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-6544c75f8-qw6bt" podUID="e073184d-6f60-4919-94e6-d04e6ac8bc91" Sep 9 00:33:05.731747 containerd[1592]: time="2025-09-09T00:33:05.731675544Z" level=error msg="StopPodSandbox for \"3663276c0852bd66d1bd2c667fe721058a9725505d67f72a86fbeb01c8f4e4d7\" failed" error="failed to destroy network for sandbox \"3663276c0852bd66d1bd2c667fe721058a9725505d67f72a86fbeb01c8f4e4d7\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 00:33:05.732934 kubelet[2703]: E0909 00:33:05.732876 2703 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"3663276c0852bd66d1bd2c667fe721058a9725505d67f72a86fbeb01c8f4e4d7\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="3663276c0852bd66d1bd2c667fe721058a9725505d67f72a86fbeb01c8f4e4d7" Sep 9 00:33:05.732982 kubelet[2703]: E0909 00:33:05.732952 2703 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"3663276c0852bd66d1bd2c667fe721058a9725505d67f72a86fbeb01c8f4e4d7"} Sep 9 00:33:05.733028 kubelet[2703]: E0909 00:33:05.733010 2703 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"a104faab-ebd4-4510-b157-e4917f6c56e1\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"3663276c0852bd66d1bd2c667fe721058a9725505d67f72a86fbeb01c8f4e4d7\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 9 00:33:05.733101 kubelet[2703]: E0909 00:33:05.733045 2703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"a104faab-ebd4-4510-b157-e4917f6c56e1\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"3663276c0852bd66d1bd2c667fe721058a9725505d67f72a86fbeb01c8f4e4d7\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-45wzl" podUID="a104faab-ebd4-4510-b157-e4917f6c56e1" Sep 9 00:33:05.755559 containerd[1592]: time="2025-09-09T00:33:05.755468984Z" level=error msg="StopPodSandbox for \"d2e24a1ecc7d9da02b3a4c3b268eb9aa6520b3498cba4e49f0f6eac8888917f3\" failed" error="failed to destroy network for sandbox 
\"d2e24a1ecc7d9da02b3a4c3b268eb9aa6520b3498cba4e49f0f6eac8888917f3\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 00:33:05.755911 kubelet[2703]: E0909 00:33:05.755861 2703 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"d2e24a1ecc7d9da02b3a4c3b268eb9aa6520b3498cba4e49f0f6eac8888917f3\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="d2e24a1ecc7d9da02b3a4c3b268eb9aa6520b3498cba4e49f0f6eac8888917f3" Sep 9 00:33:05.756029 kubelet[2703]: E0909 00:33:05.755928 2703 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"d2e24a1ecc7d9da02b3a4c3b268eb9aa6520b3498cba4e49f0f6eac8888917f3"} Sep 9 00:33:05.756029 kubelet[2703]: E0909 00:33:05.755972 2703 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"cba14676-c2d2-4393-a6fd-b4ef0dc67fba\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"d2e24a1ecc7d9da02b3a4c3b268eb9aa6520b3498cba4e49f0f6eac8888917f3\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 9 00:33:05.756029 kubelet[2703]: E0909 00:33:05.755997 2703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"cba14676-c2d2-4393-a6fd-b4ef0dc67fba\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"d2e24a1ecc7d9da02b3a4c3b268eb9aa6520b3498cba4e49f0f6eac8888917f3\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-6544c75f8-n8q9k" podUID="cba14676-c2d2-4393-a6fd-b4ef0dc67fba" Sep 9 00:33:05.759777 containerd[1592]: time="2025-09-09T00:33:05.757702182Z" level=error msg="StopPodSandbox for \"99a29b286bc17ccc596281946947470e062d46364a22d7b602e1cec39d31bc2e\" failed" error="failed to destroy network for sandbox \"99a29b286bc17ccc596281946947470e062d46364a22d7b602e1cec39d31bc2e\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 00:33:05.759777 containerd[1592]: time="2025-09-09T00:33:05.759005009Z" level=error msg="StopPodSandbox for \"dc974c20813b8cc8d10aa01c404cb83680815b33858df5921f8aaa95fbe0dd7a\" failed" error="failed to destroy network for sandbox \"dc974c20813b8cc8d10aa01c404cb83680815b33858df5921f8aaa95fbe0dd7a\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 00:33:05.759865 kubelet[2703]: E0909 00:33:05.757907 2703 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"99a29b286bc17ccc596281946947470e062d46364a22d7b602e1cec39d31bc2e\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: 
check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="99a29b286bc17ccc596281946947470e062d46364a22d7b602e1cec39d31bc2e" Sep 9 00:33:05.759865 kubelet[2703]: E0909 00:33:05.757941 2703 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"99a29b286bc17ccc596281946947470e062d46364a22d7b602e1cec39d31bc2e"} Sep 9 00:33:05.759865 kubelet[2703]: E0909 00:33:05.757963 2703 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"ec5d7db1-2706-41c7-b992-bd43c3dcfac0\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"99a29b286bc17ccc596281946947470e062d46364a22d7b602e1cec39d31bc2e\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 9 00:33:05.759865 kubelet[2703]: E0909 00:33:05.757978 2703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"ec5d7db1-2706-41c7-b992-bd43c3dcfac0\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"99a29b286bc17ccc596281946947470e062d46364a22d7b602e1cec39d31bc2e\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-4m9xw" podUID="ec5d7db1-2706-41c7-b992-bd43c3dcfac0" Sep 9 00:33:05.760147 kubelet[2703]: E0909 00:33:05.759251 2703 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"dc974c20813b8cc8d10aa01c404cb83680815b33858df5921f8aaa95fbe0dd7a\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="dc974c20813b8cc8d10aa01c404cb83680815b33858df5921f8aaa95fbe0dd7a" Sep 9 00:33:05.760147 kubelet[2703]: E0909 00:33:05.759280 2703 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"dc974c20813b8cc8d10aa01c404cb83680815b33858df5921f8aaa95fbe0dd7a"} Sep 9 00:33:05.760147 kubelet[2703]: E0909 00:33:05.759305 2703 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"25010456-202e-41ea-aa9f-fe497ae64e66\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"dc974c20813b8cc8d10aa01c404cb83680815b33858df5921f8aaa95fbe0dd7a\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 9 00:33:05.760147 kubelet[2703]: E0909 00:33:05.759331 2703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"25010456-202e-41ea-aa9f-fe497ae64e66\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"dc974c20813b8cc8d10aa01c404cb83680815b33858df5921f8aaa95fbe0dd7a\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-55bfc5d889-9l6s4" 
podUID="25010456-202e-41ea-aa9f-fe497ae64e66" Sep 9 00:33:05.760771 containerd[1592]: time="2025-09-09T00:33:05.760696682Z" level=error msg="StopPodSandbox for \"28219d814e8abef26e2c4210e72e52387ebc3b0ff691430eaba229212e0d1110\" failed" error="failed to destroy network for sandbox \"28219d814e8abef26e2c4210e72e52387ebc3b0ff691430eaba229212e0d1110\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 00:33:05.761034 kubelet[2703]: E0909 00:33:05.761001 2703 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"28219d814e8abef26e2c4210e72e52387ebc3b0ff691430eaba229212e0d1110\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="28219d814e8abef26e2c4210e72e52387ebc3b0ff691430eaba229212e0d1110" Sep 9 00:33:05.761034 kubelet[2703]: E0909 00:33:05.761031 2703 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"28219d814e8abef26e2c4210e72e52387ebc3b0ff691430eaba229212e0d1110"} Sep 9 00:33:05.761151 kubelet[2703]: E0909 00:33:05.761051 2703 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"79dee799-ef56-4018-b0a2-0bab65c4eb15\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"28219d814e8abef26e2c4210e72e52387ebc3b0ff691430eaba229212e0d1110\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 9 00:33:05.761151 kubelet[2703]: E0909 00:33:05.761070 2703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"79dee799-ef56-4018-b0a2-0bab65c4eb15\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"28219d814e8abef26e2c4210e72e52387ebc3b0ff691430eaba229212e0d1110\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-864684cb86-jwk6q" podUID="79dee799-ef56-4018-b0a2-0bab65c4eb15" Sep 9 00:33:05.812661 containerd[1592]: time="2025-09-09T00:33:05.812516050Z" level=error msg="Failed to destroy network for sandbox \"31677aded0820c2ce1d921994768b27533b6946692105fd6396b357e4cb8c793\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 00:33:05.812971 containerd[1592]: time="2025-09-09T00:33:05.812935991Z" level=error msg="encountered an error cleaning up failed sandbox \"31677aded0820c2ce1d921994768b27533b6946692105fd6396b357e4cb8c793\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 00:33:05.813236 containerd[1592]: time="2025-09-09T00:33:05.813143516Z" level=error msg="RunPodSandbox for 
&PodSandboxMetadata{Name:csi-node-driver-dvfmz,Uid:45baac1d-c9f0-4704-a887-7b015b292f0b,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"31677aded0820c2ce1d921994768b27533b6946692105fd6396b357e4cb8c793\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 00:33:05.813527 kubelet[2703]: E0909 00:33:05.813461 2703 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"31677aded0820c2ce1d921994768b27533b6946692105fd6396b357e4cb8c793\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 00:33:05.813606 kubelet[2703]: E0909 00:33:05.813572 2703 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"31677aded0820c2ce1d921994768b27533b6946692105fd6396b357e4cb8c793\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-dvfmz" Sep 9 00:33:05.813606 kubelet[2703]: E0909 00:33:05.813594 2703 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"31677aded0820c2ce1d921994768b27533b6946692105fd6396b357e4cb8c793\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-dvfmz" Sep 9 00:33:05.813809 kubelet[2703]: E0909 00:33:05.813647 2703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-dvfmz_calico-system(45baac1d-c9f0-4704-a887-7b015b292f0b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-dvfmz_calico-system(45baac1d-c9f0-4704-a887-7b015b292f0b)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"31677aded0820c2ce1d921994768b27533b6946692105fd6396b357e4cb8c793\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-dvfmz" podUID="45baac1d-c9f0-4704-a887-7b015b292f0b" Sep 9 00:33:05.816227 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-31677aded0820c2ce1d921994768b27533b6946692105fd6396b357e4cb8c793-shm.mount: Deactivated successfully. 
Sep 9 00:33:06.674383 kubelet[2703]: I0909 00:33:06.674333 2703 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="31677aded0820c2ce1d921994768b27533b6946692105fd6396b357e4cb8c793" Sep 9 00:33:06.674998 containerd[1592]: time="2025-09-09T00:33:06.674952986Z" level=info msg="StopPodSandbox for \"31677aded0820c2ce1d921994768b27533b6946692105fd6396b357e4cb8c793\"" Sep 9 00:33:06.675361 containerd[1592]: time="2025-09-09T00:33:06.675131985Z" level=info msg="Ensure that sandbox 31677aded0820c2ce1d921994768b27533b6946692105fd6396b357e4cb8c793 in task-service has been cleanup successfully" Sep 9 00:33:06.703879 containerd[1592]: time="2025-09-09T00:33:06.703815520Z" level=error msg="StopPodSandbox for \"31677aded0820c2ce1d921994768b27533b6946692105fd6396b357e4cb8c793\" failed" error="failed to destroy network for sandbox \"31677aded0820c2ce1d921994768b27533b6946692105fd6396b357e4cb8c793\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 00:33:06.704198 kubelet[2703]: E0909 00:33:06.704151 2703 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"31677aded0820c2ce1d921994768b27533b6946692105fd6396b357e4cb8c793\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="31677aded0820c2ce1d921994768b27533b6946692105fd6396b357e4cb8c793" Sep 9 00:33:06.704267 kubelet[2703]: E0909 00:33:06.704216 2703 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"31677aded0820c2ce1d921994768b27533b6946692105fd6396b357e4cb8c793"} Sep 9 00:33:06.704312 kubelet[2703]: E0909 00:33:06.704265 2703 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"45baac1d-c9f0-4704-a887-7b015b292f0b\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"31677aded0820c2ce1d921994768b27533b6946692105fd6396b357e4cb8c793\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 9 00:33:06.704312 kubelet[2703]: E0909 00:33:06.704295 2703 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"45baac1d-c9f0-4704-a887-7b015b292f0b\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"31677aded0820c2ce1d921994768b27533b6946692105fd6396b357e4cb8c793\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-dvfmz" podUID="45baac1d-c9f0-4704-a887-7b015b292f0b" Sep 9 00:33:10.066626 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3088815249.mount: Deactivated successfully. 
Sep 9 00:33:10.589708 containerd[1592]: time="2025-09-09T00:33:10.589623546Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 00:33:10.590611 containerd[1592]: time="2025-09-09T00:33:10.590515757Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.3: active requests=0, bytes read=157078339" Sep 9 00:33:10.592115 containerd[1592]: time="2025-09-09T00:33:10.592061500Z" level=info msg="ImageCreate event name:\"sha256:ce9c4ac0f175f22c56e80844e65379d9ebe1d8a4e2bbb38dc1db0f53a8826f0f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 00:33:10.594812 containerd[1592]: time="2025-09-09T00:33:10.594743359Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:bcb8146fcaeced1e1c88fad3eaa697f1680746bd23c3e7e8d4535bc484c6f2a1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 00:33:10.595355 containerd[1592]: time="2025-09-09T00:33:10.595306042Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.3\" with image id \"sha256:ce9c4ac0f175f22c56e80844e65379d9ebe1d8a4e2bbb38dc1db0f53a8826f0f\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/node@sha256:bcb8146fcaeced1e1c88fad3eaa697f1680746bd23c3e7e8d4535bc484c6f2a1\", size \"157078201\" in 4.916252402s" Sep 9 00:33:10.595355 containerd[1592]: time="2025-09-09T00:33:10.595344649Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.3\" returns image reference \"sha256:ce9c4ac0f175f22c56e80844e65379d9ebe1d8a4e2bbb38dc1db0f53a8826f0f\"" Sep 9 00:33:10.608954 containerd[1592]: time="2025-09-09T00:33:10.608886712Z" level=info msg="CreateContainer within sandbox \"d924295075ba8269e317f5ad09b06d91c282d1d1206a080dd27cf99ff66f9509\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Sep 9 00:33:10.657205 containerd[1592]: time="2025-09-09T00:33:10.657132543Z" level=info msg="CreateContainer within sandbox \"d924295075ba8269e317f5ad09b06d91c282d1d1206a080dd27cf99ff66f9509\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"b73e803d8dd35c21e7eb2f973078d490f22d4a405751e1ccc0f8035ef2893a90\"" Sep 9 00:33:10.657882 containerd[1592]: time="2025-09-09T00:33:10.657837739Z" level=info msg="StartContainer for \"b73e803d8dd35c21e7eb2f973078d490f22d4a405751e1ccc0f8035ef2893a90\"" Sep 9 00:33:10.752706 containerd[1592]: time="2025-09-09T00:33:10.752652954Z" level=info msg="StartContainer for \"b73e803d8dd35c21e7eb2f973078d490f22d4a405751e1ccc0f8035ef2893a90\" returns successfully" Sep 9 00:33:10.850509 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Sep 9 00:33:10.851276 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld <Jason@zx2c4.com>. All Rights Reserved. Sep 9 00:33:10.936859 containerd[1592]: time="2025-09-09T00:33:10.936805579Z" level=info msg="StopPodSandbox for \"28219d814e8abef26e2c4210e72e52387ebc3b0ff691430eaba229212e0d1110\"" Sep 9 00:33:11.123004 containerd[1592]: 2025-09-09 00:33:11.033 [INFO][3953] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="28219d814e8abef26e2c4210e72e52387ebc3b0ff691430eaba229212e0d1110" Sep 9 00:33:11.123004 containerd[1592]: 2025-09-09 00:33:11.034 [INFO][3953] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. 
ContainerID="28219d814e8abef26e2c4210e72e52387ebc3b0ff691430eaba229212e0d1110" iface="eth0" netns="/var/run/netns/cni-95939448-f4d0-7bdc-fdd2-a8fc58c11995" Sep 9 00:33:11.123004 containerd[1592]: 2025-09-09 00:33:11.034 [INFO][3953] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="28219d814e8abef26e2c4210e72e52387ebc3b0ff691430eaba229212e0d1110" iface="eth0" netns="/var/run/netns/cni-95939448-f4d0-7bdc-fdd2-a8fc58c11995" Sep 9 00:33:11.123004 containerd[1592]: 2025-09-09 00:33:11.035 [INFO][3953] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="28219d814e8abef26e2c4210e72e52387ebc3b0ff691430eaba229212e0d1110" iface="eth0" netns="/var/run/netns/cni-95939448-f4d0-7bdc-fdd2-a8fc58c11995" Sep 9 00:33:11.123004 containerd[1592]: 2025-09-09 00:33:11.035 [INFO][3953] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="28219d814e8abef26e2c4210e72e52387ebc3b0ff691430eaba229212e0d1110" Sep 9 00:33:11.123004 containerd[1592]: 2025-09-09 00:33:11.035 [INFO][3953] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="28219d814e8abef26e2c4210e72e52387ebc3b0ff691430eaba229212e0d1110" Sep 9 00:33:11.123004 containerd[1592]: 2025-09-09 00:33:11.105 [INFO][3964] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="28219d814e8abef26e2c4210e72e52387ebc3b0ff691430eaba229212e0d1110" HandleID="k8s-pod-network.28219d814e8abef26e2c4210e72e52387ebc3b0ff691430eaba229212e0d1110" Workload="localhost-k8s-whisker--864684cb86--jwk6q-eth0" Sep 9 00:33:11.123004 containerd[1592]: 2025-09-09 00:33:11.106 [INFO][3964] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 9 00:33:11.123004 containerd[1592]: 2025-09-09 00:33:11.106 [INFO][3964] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 9 00:33:11.123004 containerd[1592]: 2025-09-09 00:33:11.113 [WARNING][3964] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="28219d814e8abef26e2c4210e72e52387ebc3b0ff691430eaba229212e0d1110" HandleID="k8s-pod-network.28219d814e8abef26e2c4210e72e52387ebc3b0ff691430eaba229212e0d1110" Workload="localhost-k8s-whisker--864684cb86--jwk6q-eth0" Sep 9 00:33:11.123004 containerd[1592]: 2025-09-09 00:33:11.113 [INFO][3964] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="28219d814e8abef26e2c4210e72e52387ebc3b0ff691430eaba229212e0d1110" HandleID="k8s-pod-network.28219d814e8abef26e2c4210e72e52387ebc3b0ff691430eaba229212e0d1110" Workload="localhost-k8s-whisker--864684cb86--jwk6q-eth0" Sep 9 00:33:11.123004 containerd[1592]: 2025-09-09 00:33:11.116 [INFO][3964] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 9 00:33:11.123004 containerd[1592]: 2025-09-09 00:33:11.120 [INFO][3953] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="28219d814e8abef26e2c4210e72e52387ebc3b0ff691430eaba229212e0d1110" Sep 9 00:33:11.123612 containerd[1592]: time="2025-09-09T00:33:11.123214988Z" level=info msg="TearDown network for sandbox \"28219d814e8abef26e2c4210e72e52387ebc3b0ff691430eaba229212e0d1110\" successfully" Sep 9 00:33:11.123612 containerd[1592]: time="2025-09-09T00:33:11.123253114Z" level=info msg="StopPodSandbox for \"28219d814e8abef26e2c4210e72e52387ebc3b0ff691430eaba229212e0d1110\" returns successfully" Sep 9 00:33:11.125946 systemd[1]: run-netns-cni\x2d95939448\x2df4d0\x2d7bdc\x2dfdd2\x2da8fc58c11995.mount: Deactivated successfully. 
Sep 9 00:33:11.313704 kubelet[2703]: I0909 00:33:11.313624 2703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/79dee799-ef56-4018-b0a2-0bab65c4eb15-whisker-ca-bundle\") pod \"79dee799-ef56-4018-b0a2-0bab65c4eb15\" (UID: \"79dee799-ef56-4018-b0a2-0bab65c4eb15\") " Sep 9 00:33:11.313704 kubelet[2703]: I0909 00:33:11.313693 2703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/79dee799-ef56-4018-b0a2-0bab65c4eb15-whisker-backend-key-pair\") pod \"79dee799-ef56-4018-b0a2-0bab65c4eb15\" (UID: \"79dee799-ef56-4018-b0a2-0bab65c4eb15\") " Sep 9 00:33:11.314260 kubelet[2703]: I0909 00:33:11.313722 2703 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c48bt\" (UniqueName: \"kubernetes.io/projected/79dee799-ef56-4018-b0a2-0bab65c4eb15-kube-api-access-c48bt\") pod \"79dee799-ef56-4018-b0a2-0bab65c4eb15\" (UID: \"79dee799-ef56-4018-b0a2-0bab65c4eb15\") " Sep 9 00:33:11.314260 kubelet[2703]: I0909 00:33:11.314225 2703 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/79dee799-ef56-4018-b0a2-0bab65c4eb15-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "79dee799-ef56-4018-b0a2-0bab65c4eb15" (UID: "79dee799-ef56-4018-b0a2-0bab65c4eb15"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Sep 9 00:33:11.320429 kubelet[2703]: I0909 00:33:11.318550 2703 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/79dee799-ef56-4018-b0a2-0bab65c4eb15-kube-api-access-c48bt" (OuterVolumeSpecName: "kube-api-access-c48bt") pod "79dee799-ef56-4018-b0a2-0bab65c4eb15" (UID: "79dee799-ef56-4018-b0a2-0bab65c4eb15"). InnerVolumeSpecName "kube-api-access-c48bt". PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 9 00:33:11.320429 kubelet[2703]: I0909 00:33:11.318551 2703 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/79dee799-ef56-4018-b0a2-0bab65c4eb15-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "79dee799-ef56-4018-b0a2-0bab65c4eb15" (UID: "79dee799-ef56-4018-b0a2-0bab65c4eb15"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 9 00:33:11.320640 systemd[1]: var-lib-kubelet-pods-79dee799\x2def56\x2d4018\x2db0a2\x2d0bab65c4eb15-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dc48bt.mount: Deactivated successfully. Sep 9 00:33:11.320822 systemd[1]: var-lib-kubelet-pods-79dee799\x2def56\x2d4018\x2db0a2\x2d0bab65c4eb15-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. 
Sep 9 00:33:11.414173 kubelet[2703]: I0909 00:33:11.414121 2703 reconciler_common.go:293] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/79dee799-ef56-4018-b0a2-0bab65c4eb15-whisker-ca-bundle\") on node \"localhost\" DevicePath \"\"" Sep 9 00:33:11.414173 kubelet[2703]: I0909 00:33:11.414155 2703 reconciler_common.go:293] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/79dee799-ef56-4018-b0a2-0bab65c4eb15-whisker-backend-key-pair\") on node \"localhost\" DevicePath \"\"" Sep 9 00:33:11.414173 kubelet[2703]: I0909 00:33:11.414165 2703 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-c48bt\" (UniqueName: \"kubernetes.io/projected/79dee799-ef56-4018-b0a2-0bab65c4eb15-kube-api-access-c48bt\") on node \"localhost\" DevicePath \"\"" Sep 9 00:33:11.805482 kubelet[2703]: I0909 00:33:11.805272 2703 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-2pdlc" podStartSLOduration=1.849068204 podStartE2EDuration="16.805249699s" podCreationTimestamp="2025-09-09 00:32:55 +0000 UTC" firstStartedPulling="2025-09-09 00:32:55.640134322 +0000 UTC m=+17.225518025" lastFinishedPulling="2025-09-09 00:33:10.596315807 +0000 UTC m=+32.181699520" observedRunningTime="2025-09-09 00:33:11.804664983 +0000 UTC m=+33.390048696" watchObservedRunningTime="2025-09-09 00:33:11.805249699 +0000 UTC m=+33.390633392" Sep 9 00:33:12.522853 kubelet[2703]: I0909 00:33:12.522761 2703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/419e5ab9-5b8a-4195-94cd-8d10e989cc2c-whisker-backend-key-pair\") pod \"whisker-56b66b49b9-ts5r5\" (UID: \"419e5ab9-5b8a-4195-94cd-8d10e989cc2c\") " pod="calico-system/whisker-56b66b49b9-ts5r5" Sep 9 00:33:12.522853 kubelet[2703]: I0909 00:33:12.522820 2703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xtr2l\" (UniqueName: \"kubernetes.io/projected/419e5ab9-5b8a-4195-94cd-8d10e989cc2c-kube-api-access-xtr2l\") pod \"whisker-56b66b49b9-ts5r5\" (UID: \"419e5ab9-5b8a-4195-94cd-8d10e989cc2c\") " pod="calico-system/whisker-56b66b49b9-ts5r5" Sep 9 00:33:12.522853 kubelet[2703]: I0909 00:33:12.522844 2703 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/419e5ab9-5b8a-4195-94cd-8d10e989cc2c-whisker-ca-bundle\") pod \"whisker-56b66b49b9-ts5r5\" (UID: \"419e5ab9-5b8a-4195-94cd-8d10e989cc2c\") " pod="calico-system/whisker-56b66b49b9-ts5r5" Sep 9 00:33:12.525937 kubelet[2703]: I0909 00:33:12.525900 2703 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="79dee799-ef56-4018-b0a2-0bab65c4eb15" path="/var/lib/kubelet/pods/79dee799-ef56-4018-b0a2-0bab65c4eb15/volumes" Sep 9 00:33:12.758447 kernel: bpftool[4116]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Sep 9 00:33:13.009238 containerd[1592]: time="2025-09-09T00:33:13.009152894Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-56b66b49b9-ts5r5,Uid:419e5ab9-5b8a-4195-94cd-8d10e989cc2c,Namespace:calico-system,Attempt:0,}" Sep 9 00:33:13.016053 systemd-networkd[1247]: vxlan.calico: Link UP Sep 9 00:33:13.016067 systemd-networkd[1247]: vxlan.calico: Gained carrier Sep 9 00:33:13.260900 systemd-networkd[1247]: cali0633a4bf414: Link UP Sep 9 00:33:13.262165 systemd-networkd[1247]: 
cali0633a4bf414: Gained carrier Sep 9 00:33:13.281291 containerd[1592]: 2025-09-09 00:33:13.193 [INFO][4158] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-whisker--56b66b49b9--ts5r5-eth0 whisker-56b66b49b9- calico-system 419e5ab9-5b8a-4195-94cd-8d10e989cc2c 940 0 2025-09-09 00:33:12 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:56b66b49b9 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s localhost whisker-56b66b49b9-ts5r5 eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] cali0633a4bf414 [] [] }} ContainerID="7d704a29111e8d2dbbf8bb5e1c2b62005c81b17f172326aceea906596f3c18ec" Namespace="calico-system" Pod="whisker-56b66b49b9-ts5r5" WorkloadEndpoint="localhost-k8s-whisker--56b66b49b9--ts5r5-" Sep 9 00:33:13.281291 containerd[1592]: 2025-09-09 00:33:13.194 [INFO][4158] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="7d704a29111e8d2dbbf8bb5e1c2b62005c81b17f172326aceea906596f3c18ec" Namespace="calico-system" Pod="whisker-56b66b49b9-ts5r5" WorkloadEndpoint="localhost-k8s-whisker--56b66b49b9--ts5r5-eth0" Sep 9 00:33:13.281291 containerd[1592]: 2025-09-09 00:33:13.223 [INFO][4174] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="7d704a29111e8d2dbbf8bb5e1c2b62005c81b17f172326aceea906596f3c18ec" HandleID="k8s-pod-network.7d704a29111e8d2dbbf8bb5e1c2b62005c81b17f172326aceea906596f3c18ec" Workload="localhost-k8s-whisker--56b66b49b9--ts5r5-eth0" Sep 9 00:33:13.281291 containerd[1592]: 2025-09-09 00:33:13.223 [INFO][4174] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="7d704a29111e8d2dbbf8bb5e1c2b62005c81b17f172326aceea906596f3c18ec" HandleID="k8s-pod-network.7d704a29111e8d2dbbf8bb5e1c2b62005c81b17f172326aceea906596f3c18ec" Workload="localhost-k8s-whisker--56b66b49b9--ts5r5-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000119e80), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"whisker-56b66b49b9-ts5r5", "timestamp":"2025-09-09 00:33:13.223665365 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 9 00:33:13.281291 containerd[1592]: 2025-09-09 00:33:13.223 [INFO][4174] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 9 00:33:13.281291 containerd[1592]: 2025-09-09 00:33:13.223 [INFO][4174] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 9 00:33:13.281291 containerd[1592]: 2025-09-09 00:33:13.224 [INFO][4174] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 9 00:33:13.281291 containerd[1592]: 2025-09-09 00:33:13.232 [INFO][4174] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.7d704a29111e8d2dbbf8bb5e1c2b62005c81b17f172326aceea906596f3c18ec" host="localhost" Sep 9 00:33:13.281291 containerd[1592]: 2025-09-09 00:33:13.237 [INFO][4174] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 9 00:33:13.281291 containerd[1592]: 2025-09-09 00:33:13.241 [INFO][4174] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 9 00:33:13.281291 containerd[1592]: 2025-09-09 00:33:13.243 [INFO][4174] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 9 00:33:13.281291 containerd[1592]: 2025-09-09 00:33:13.245 [INFO][4174] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 9 00:33:13.281291 containerd[1592]: 2025-09-09 00:33:13.245 [INFO][4174] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.7d704a29111e8d2dbbf8bb5e1c2b62005c81b17f172326aceea906596f3c18ec" host="localhost" Sep 9 00:33:13.281291 containerd[1592]: 2025-09-09 00:33:13.246 [INFO][4174] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.7d704a29111e8d2dbbf8bb5e1c2b62005c81b17f172326aceea906596f3c18ec Sep 9 00:33:13.281291 containerd[1592]: 2025-09-09 00:33:13.251 [INFO][4174] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.7d704a29111e8d2dbbf8bb5e1c2b62005c81b17f172326aceea906596f3c18ec" host="localhost" Sep 9 00:33:13.281291 containerd[1592]: 2025-09-09 00:33:13.255 [INFO][4174] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.129/26] block=192.168.88.128/26 handle="k8s-pod-network.7d704a29111e8d2dbbf8bb5e1c2b62005c81b17f172326aceea906596f3c18ec" host="localhost" Sep 9 00:33:13.281291 containerd[1592]: 2025-09-09 00:33:13.255 [INFO][4174] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.129/26] handle="k8s-pod-network.7d704a29111e8d2dbbf8bb5e1c2b62005c81b17f172326aceea906596f3c18ec" host="localhost" Sep 9 00:33:13.281291 containerd[1592]: 2025-09-09 00:33:13.255 [INFO][4174] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Sep 9 00:33:13.281291 containerd[1592]: 2025-09-09 00:33:13.255 [INFO][4174] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.129/26] IPv6=[] ContainerID="7d704a29111e8d2dbbf8bb5e1c2b62005c81b17f172326aceea906596f3c18ec" HandleID="k8s-pod-network.7d704a29111e8d2dbbf8bb5e1c2b62005c81b17f172326aceea906596f3c18ec" Workload="localhost-k8s-whisker--56b66b49b9--ts5r5-eth0" Sep 9 00:33:13.281861 containerd[1592]: 2025-09-09 00:33:13.258 [INFO][4158] cni-plugin/k8s.go 418: Populated endpoint ContainerID="7d704a29111e8d2dbbf8bb5e1c2b62005c81b17f172326aceea906596f3c18ec" Namespace="calico-system" Pod="whisker-56b66b49b9-ts5r5" WorkloadEndpoint="localhost-k8s-whisker--56b66b49b9--ts5r5-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--56b66b49b9--ts5r5-eth0", GenerateName:"whisker-56b66b49b9-", Namespace:"calico-system", SelfLink:"", UID:"419e5ab9-5b8a-4195-94cd-8d10e989cc2c", ResourceVersion:"940", Generation:0, CreationTimestamp:time.Date(2025, time.September, 9, 0, 33, 12, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"56b66b49b9", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"whisker-56b66b49b9-ts5r5", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali0633a4bf414", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 9 00:33:13.281861 containerd[1592]: 2025-09-09 00:33:13.259 [INFO][4158] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.129/32] ContainerID="7d704a29111e8d2dbbf8bb5e1c2b62005c81b17f172326aceea906596f3c18ec" Namespace="calico-system" Pod="whisker-56b66b49b9-ts5r5" WorkloadEndpoint="localhost-k8s-whisker--56b66b49b9--ts5r5-eth0" Sep 9 00:33:13.281861 containerd[1592]: 2025-09-09 00:33:13.259 [INFO][4158] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali0633a4bf414 ContainerID="7d704a29111e8d2dbbf8bb5e1c2b62005c81b17f172326aceea906596f3c18ec" Namespace="calico-system" Pod="whisker-56b66b49b9-ts5r5" WorkloadEndpoint="localhost-k8s-whisker--56b66b49b9--ts5r5-eth0" Sep 9 00:33:13.281861 containerd[1592]: 2025-09-09 00:33:13.261 [INFO][4158] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="7d704a29111e8d2dbbf8bb5e1c2b62005c81b17f172326aceea906596f3c18ec" Namespace="calico-system" Pod="whisker-56b66b49b9-ts5r5" WorkloadEndpoint="localhost-k8s-whisker--56b66b49b9--ts5r5-eth0" Sep 9 00:33:13.281861 containerd[1592]: 2025-09-09 00:33:13.263 [INFO][4158] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="7d704a29111e8d2dbbf8bb5e1c2b62005c81b17f172326aceea906596f3c18ec" Namespace="calico-system" Pod="whisker-56b66b49b9-ts5r5" WorkloadEndpoint="localhost-k8s-whisker--56b66b49b9--ts5r5-eth0" 
endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--56b66b49b9--ts5r5-eth0", GenerateName:"whisker-56b66b49b9-", Namespace:"calico-system", SelfLink:"", UID:"419e5ab9-5b8a-4195-94cd-8d10e989cc2c", ResourceVersion:"940", Generation:0, CreationTimestamp:time.Date(2025, time.September, 9, 0, 33, 12, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"56b66b49b9", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"7d704a29111e8d2dbbf8bb5e1c2b62005c81b17f172326aceea906596f3c18ec", Pod:"whisker-56b66b49b9-ts5r5", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali0633a4bf414", MAC:"f2:55:e9:85:47:11", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 9 00:33:13.281861 containerd[1592]: 2025-09-09 00:33:13.275 [INFO][4158] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="7d704a29111e8d2dbbf8bb5e1c2b62005c81b17f172326aceea906596f3c18ec" Namespace="calico-system" Pod="whisker-56b66b49b9-ts5r5" WorkloadEndpoint="localhost-k8s-whisker--56b66b49b9--ts5r5-eth0" Sep 9 00:33:13.313146 containerd[1592]: time="2025-09-09T00:33:13.312989774Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 9 00:33:13.313146 containerd[1592]: time="2025-09-09T00:33:13.313051736Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 9 00:33:13.313146 containerd[1592]: time="2025-09-09T00:33:13.313068968Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 9 00:33:13.313519 containerd[1592]: time="2025-09-09T00:33:13.313404285Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 9 00:33:13.348138 systemd-resolved[1467]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 9 00:33:13.383136 containerd[1592]: time="2025-09-09T00:33:13.383089499Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-56b66b49b9-ts5r5,Uid:419e5ab9-5b8a-4195-94cd-8d10e989cc2c,Namespace:calico-system,Attempt:0,} returns sandbox id \"7d704a29111e8d2dbbf8bb5e1c2b62005c81b17f172326aceea906596f3c18ec\"" Sep 9 00:33:13.384787 containerd[1592]: time="2025-09-09T00:33:13.384758223Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.3\"" Sep 9 00:33:14.415650 systemd-networkd[1247]: cali0633a4bf414: Gained IPv6LL Sep 9 00:33:14.837101 containerd[1592]: time="2025-09-09T00:33:14.837034659Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 00:33:14.838748 containerd[1592]: time="2025-09-09T00:33:14.838686470Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.3: active requests=0, bytes read=4661291" Sep 9 00:33:14.839919 containerd[1592]: time="2025-09-09T00:33:14.839872275Z" level=info msg="ImageCreate event name:\"sha256:9a4eedeed4a531acefb7f5d0a1b7e3856b1a9a24d9e7d25deef2134d7a734c2d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 00:33:14.842964 containerd[1592]: time="2025-09-09T00:33:14.842916838Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker@sha256:e7113761fc7633d515882f0d48b5c8d0b8e62f3f9d34823f2ee194bb16d2ec44\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 00:33:14.843925 containerd[1592]: time="2025-09-09T00:33:14.843881911Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker:v3.30.3\" with image id \"sha256:9a4eedeed4a531acefb7f5d0a1b7e3856b1a9a24d9e7d25deef2134d7a734c2d\", repo tag \"ghcr.io/flatcar/calico/whisker:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/whisker@sha256:e7113761fc7633d515882f0d48b5c8d0b8e62f3f9d34823f2ee194bb16d2ec44\", size \"6153986\" in 1.459082153s" Sep 9 00:33:14.843999 containerd[1592]: time="2025-09-09T00:33:14.843924539Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.3\" returns image reference \"sha256:9a4eedeed4a531acefb7f5d0a1b7e3856b1a9a24d9e7d25deef2134d7a734c2d\"" Sep 9 00:33:14.846011 containerd[1592]: time="2025-09-09T00:33:14.845981366Z" level=info msg="CreateContainer within sandbox \"7d704a29111e8d2dbbf8bb5e1c2b62005c81b17f172326aceea906596f3c18ec\" for container &ContainerMetadata{Name:whisker,Attempt:0,}" Sep 9 00:33:14.862822 containerd[1592]: time="2025-09-09T00:33:14.862769322Z" level=info msg="CreateContainer within sandbox \"7d704a29111e8d2dbbf8bb5e1c2b62005c81b17f172326aceea906596f3c18ec\" for &ContainerMetadata{Name:whisker,Attempt:0,} returns container id \"538e6f7392adf749781a3bbedf5847223ed0b6c757c861e115cad283702e8a83\"" Sep 9 00:33:14.863392 containerd[1592]: time="2025-09-09T00:33:14.863360466Z" level=info msg="StartContainer for \"538e6f7392adf749781a3bbedf5847223ed0b6c757c861e115cad283702e8a83\"" Sep 9 00:33:14.927752 systemd-networkd[1247]: vxlan.calico: Gained IPv6LL Sep 9 00:33:14.944782 containerd[1592]: time="2025-09-09T00:33:14.944715028Z" level=info msg="StartContainer for \"538e6f7392adf749781a3bbedf5847223ed0b6c757c861e115cad283702e8a83\" returns successfully" Sep 9 00:33:14.946134 containerd[1592]: time="2025-09-09T00:33:14.946057227Z" level=info msg="PullImage 
\"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\"" Sep 9 00:33:16.523450 containerd[1592]: time="2025-09-09T00:33:16.523168039Z" level=info msg="StopPodSandbox for \"41eaf270ad07ea5dec1b464304cb6153982cf5ead00602cfbaf34784c065a96a\"" Sep 9 00:33:16.609494 containerd[1592]: 2025-09-09 00:33:16.569 [INFO][4323] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="41eaf270ad07ea5dec1b464304cb6153982cf5ead00602cfbaf34784c065a96a" Sep 9 00:33:16.609494 containerd[1592]: 2025-09-09 00:33:16.569 [INFO][4323] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="41eaf270ad07ea5dec1b464304cb6153982cf5ead00602cfbaf34784c065a96a" iface="eth0" netns="/var/run/netns/cni-4a26755d-0c3a-dd4a-565c-39fd875a480f" Sep 9 00:33:16.609494 containerd[1592]: 2025-09-09 00:33:16.570 [INFO][4323] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="41eaf270ad07ea5dec1b464304cb6153982cf5ead00602cfbaf34784c065a96a" iface="eth0" netns="/var/run/netns/cni-4a26755d-0c3a-dd4a-565c-39fd875a480f" Sep 9 00:33:16.609494 containerd[1592]: 2025-09-09 00:33:16.570 [INFO][4323] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="41eaf270ad07ea5dec1b464304cb6153982cf5ead00602cfbaf34784c065a96a" iface="eth0" netns="/var/run/netns/cni-4a26755d-0c3a-dd4a-565c-39fd875a480f" Sep 9 00:33:16.609494 containerd[1592]: 2025-09-09 00:33:16.570 [INFO][4323] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="41eaf270ad07ea5dec1b464304cb6153982cf5ead00602cfbaf34784c065a96a" Sep 9 00:33:16.609494 containerd[1592]: 2025-09-09 00:33:16.570 [INFO][4323] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="41eaf270ad07ea5dec1b464304cb6153982cf5ead00602cfbaf34784c065a96a" Sep 9 00:33:16.609494 containerd[1592]: 2025-09-09 00:33:16.594 [INFO][4332] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="41eaf270ad07ea5dec1b464304cb6153982cf5ead00602cfbaf34784c065a96a" HandleID="k8s-pod-network.41eaf270ad07ea5dec1b464304cb6153982cf5ead00602cfbaf34784c065a96a" Workload="localhost-k8s-goldmane--7988f88666--csr6g-eth0" Sep 9 00:33:16.609494 containerd[1592]: 2025-09-09 00:33:16.594 [INFO][4332] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 9 00:33:16.609494 containerd[1592]: 2025-09-09 00:33:16.594 [INFO][4332] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 9 00:33:16.609494 containerd[1592]: 2025-09-09 00:33:16.602 [WARNING][4332] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="41eaf270ad07ea5dec1b464304cb6153982cf5ead00602cfbaf34784c065a96a" HandleID="k8s-pod-network.41eaf270ad07ea5dec1b464304cb6153982cf5ead00602cfbaf34784c065a96a" Workload="localhost-k8s-goldmane--7988f88666--csr6g-eth0" Sep 9 00:33:16.609494 containerd[1592]: 2025-09-09 00:33:16.602 [INFO][4332] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="41eaf270ad07ea5dec1b464304cb6153982cf5ead00602cfbaf34784c065a96a" HandleID="k8s-pod-network.41eaf270ad07ea5dec1b464304cb6153982cf5ead00602cfbaf34784c065a96a" Workload="localhost-k8s-goldmane--7988f88666--csr6g-eth0" Sep 9 00:33:16.609494 containerd[1592]: 2025-09-09 00:33:16.603 [INFO][4332] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 9 00:33:16.609494 containerd[1592]: 2025-09-09 00:33:16.606 [INFO][4323] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="41eaf270ad07ea5dec1b464304cb6153982cf5ead00602cfbaf34784c065a96a" Sep 9 00:33:16.611086 containerd[1592]: time="2025-09-09T00:33:16.609674108Z" level=info msg="TearDown network for sandbox \"41eaf270ad07ea5dec1b464304cb6153982cf5ead00602cfbaf34784c065a96a\" successfully" Sep 9 00:33:16.611086 containerd[1592]: time="2025-09-09T00:33:16.609704663Z" level=info msg="StopPodSandbox for \"41eaf270ad07ea5dec1b464304cb6153982cf5ead00602cfbaf34784c065a96a\" returns successfully" Sep 9 00:33:16.611086 containerd[1592]: time="2025-09-09T00:33:16.610690854Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7988f88666-csr6g,Uid:1a0e64da-2e8f-4229-9092-4e3f71b7565b,Namespace:calico-system,Attempt:1,}" Sep 9 00:33:16.613354 systemd[1]: run-netns-cni\x2d4a26755d\x2d0c3a\x2ddd4a\x2d565c\x2d39fd875a480f.mount: Deactivated successfully. Sep 9 00:33:17.073237 systemd-networkd[1247]: cali7cd6eb22e43: Link UP Sep 9 00:33:17.073721 systemd-networkd[1247]: cali7cd6eb22e43: Gained carrier Sep 9 00:33:17.095251 containerd[1592]: 2025-09-09 00:33:17.010 [INFO][4340] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-goldmane--7988f88666--csr6g-eth0 goldmane-7988f88666- calico-system 1a0e64da-2e8f-4229-9092-4e3f71b7565b 955 0 2025-09-09 00:32:54 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:7988f88666 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s localhost goldmane-7988f88666-csr6g eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] cali7cd6eb22e43 [] [] }} ContainerID="b7c1aa64bb7acb9829411e84de7fcc335853c9365924ac1d62fb591488d31d77" Namespace="calico-system" Pod="goldmane-7988f88666-csr6g" WorkloadEndpoint="localhost-k8s-goldmane--7988f88666--csr6g-" Sep 9 00:33:17.095251 containerd[1592]: 2025-09-09 00:33:17.011 [INFO][4340] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="b7c1aa64bb7acb9829411e84de7fcc335853c9365924ac1d62fb591488d31d77" Namespace="calico-system" Pod="goldmane-7988f88666-csr6g" WorkloadEndpoint="localhost-k8s-goldmane--7988f88666--csr6g-eth0" Sep 9 00:33:17.095251 containerd[1592]: 2025-09-09 00:33:17.034 [INFO][4353] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="b7c1aa64bb7acb9829411e84de7fcc335853c9365924ac1d62fb591488d31d77" HandleID="k8s-pod-network.b7c1aa64bb7acb9829411e84de7fcc335853c9365924ac1d62fb591488d31d77" Workload="localhost-k8s-goldmane--7988f88666--csr6g-eth0" Sep 9 00:33:17.095251 containerd[1592]: 2025-09-09 00:33:17.034 [INFO][4353] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="b7c1aa64bb7acb9829411e84de7fcc335853c9365924ac1d62fb591488d31d77" HandleID="k8s-pod-network.b7c1aa64bb7acb9829411e84de7fcc335853c9365924ac1d62fb591488d31d77" Workload="localhost-k8s-goldmane--7988f88666--csr6g-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0005031a0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"goldmane-7988f88666-csr6g", "timestamp":"2025-09-09 00:33:17.03468305 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 9 00:33:17.095251 containerd[1592]: 2025-09-09 00:33:17.034 [INFO][4353] ipam/ipam_plugin.go 353: About to acquire 
host-wide IPAM lock. Sep 9 00:33:17.095251 containerd[1592]: 2025-09-09 00:33:17.034 [INFO][4353] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 9 00:33:17.095251 containerd[1592]: 2025-09-09 00:33:17.034 [INFO][4353] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 9 00:33:17.095251 containerd[1592]: 2025-09-09 00:33:17.040 [INFO][4353] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.b7c1aa64bb7acb9829411e84de7fcc335853c9365924ac1d62fb591488d31d77" host="localhost" Sep 9 00:33:17.095251 containerd[1592]: 2025-09-09 00:33:17.047 [INFO][4353] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 9 00:33:17.095251 containerd[1592]: 2025-09-09 00:33:17.052 [INFO][4353] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 9 00:33:17.095251 containerd[1592]: 2025-09-09 00:33:17.053 [INFO][4353] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 9 00:33:17.095251 containerd[1592]: 2025-09-09 00:33:17.055 [INFO][4353] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 9 00:33:17.095251 containerd[1592]: 2025-09-09 00:33:17.056 [INFO][4353] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.b7c1aa64bb7acb9829411e84de7fcc335853c9365924ac1d62fb591488d31d77" host="localhost" Sep 9 00:33:17.095251 containerd[1592]: 2025-09-09 00:33:17.057 [INFO][4353] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.b7c1aa64bb7acb9829411e84de7fcc335853c9365924ac1d62fb591488d31d77 Sep 9 00:33:17.095251 containerd[1592]: 2025-09-09 00:33:17.063 [INFO][4353] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.b7c1aa64bb7acb9829411e84de7fcc335853c9365924ac1d62fb591488d31d77" host="localhost" Sep 9 00:33:17.095251 containerd[1592]: 2025-09-09 00:33:17.068 [INFO][4353] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.130/26] block=192.168.88.128/26 handle="k8s-pod-network.b7c1aa64bb7acb9829411e84de7fcc335853c9365924ac1d62fb591488d31d77" host="localhost" Sep 9 00:33:17.095251 containerd[1592]: 2025-09-09 00:33:17.068 [INFO][4353] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.130/26] handle="k8s-pod-network.b7c1aa64bb7acb9829411e84de7fcc335853c9365924ac1d62fb591488d31d77" host="localhost" Sep 9 00:33:17.095251 containerd[1592]: 2025-09-09 00:33:17.068 [INFO][4353] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Sep 9 00:33:17.095251 containerd[1592]: 2025-09-09 00:33:17.068 [INFO][4353] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.130/26] IPv6=[] ContainerID="b7c1aa64bb7acb9829411e84de7fcc335853c9365924ac1d62fb591488d31d77" HandleID="k8s-pod-network.b7c1aa64bb7acb9829411e84de7fcc335853c9365924ac1d62fb591488d31d77" Workload="localhost-k8s-goldmane--7988f88666--csr6g-eth0" Sep 9 00:33:17.095885 containerd[1592]: 2025-09-09 00:33:17.071 [INFO][4340] cni-plugin/k8s.go 418: Populated endpoint ContainerID="b7c1aa64bb7acb9829411e84de7fcc335853c9365924ac1d62fb591488d31d77" Namespace="calico-system" Pod="goldmane-7988f88666-csr6g" WorkloadEndpoint="localhost-k8s-goldmane--7988f88666--csr6g-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--7988f88666--csr6g-eth0", GenerateName:"goldmane-7988f88666-", Namespace:"calico-system", SelfLink:"", UID:"1a0e64da-2e8f-4229-9092-4e3f71b7565b", ResourceVersion:"955", Generation:0, CreationTimestamp:time.Date(2025, time.September, 9, 0, 32, 54, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"7988f88666", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"goldmane-7988f88666-csr6g", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali7cd6eb22e43", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 9 00:33:17.095885 containerd[1592]: 2025-09-09 00:33:17.071 [INFO][4340] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.130/32] ContainerID="b7c1aa64bb7acb9829411e84de7fcc335853c9365924ac1d62fb591488d31d77" Namespace="calico-system" Pod="goldmane-7988f88666-csr6g" WorkloadEndpoint="localhost-k8s-goldmane--7988f88666--csr6g-eth0" Sep 9 00:33:17.095885 containerd[1592]: 2025-09-09 00:33:17.071 [INFO][4340] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali7cd6eb22e43 ContainerID="b7c1aa64bb7acb9829411e84de7fcc335853c9365924ac1d62fb591488d31d77" Namespace="calico-system" Pod="goldmane-7988f88666-csr6g" WorkloadEndpoint="localhost-k8s-goldmane--7988f88666--csr6g-eth0" Sep 9 00:33:17.095885 containerd[1592]: 2025-09-09 00:33:17.075 [INFO][4340] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="b7c1aa64bb7acb9829411e84de7fcc335853c9365924ac1d62fb591488d31d77" Namespace="calico-system" Pod="goldmane-7988f88666-csr6g" WorkloadEndpoint="localhost-k8s-goldmane--7988f88666--csr6g-eth0" Sep 9 00:33:17.095885 containerd[1592]: 2025-09-09 00:33:17.075 [INFO][4340] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="b7c1aa64bb7acb9829411e84de7fcc335853c9365924ac1d62fb591488d31d77" Namespace="calico-system" Pod="goldmane-7988f88666-csr6g" WorkloadEndpoint="localhost-k8s-goldmane--7988f88666--csr6g-eth0" 
endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--7988f88666--csr6g-eth0", GenerateName:"goldmane-7988f88666-", Namespace:"calico-system", SelfLink:"", UID:"1a0e64da-2e8f-4229-9092-4e3f71b7565b", ResourceVersion:"955", Generation:0, CreationTimestamp:time.Date(2025, time.September, 9, 0, 32, 54, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"7988f88666", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"b7c1aa64bb7acb9829411e84de7fcc335853c9365924ac1d62fb591488d31d77", Pod:"goldmane-7988f88666-csr6g", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali7cd6eb22e43", MAC:"46:3a:d4:9e:ab:14", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 9 00:33:17.095885 containerd[1592]: 2025-09-09 00:33:17.087 [INFO][4340] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="b7c1aa64bb7acb9829411e84de7fcc335853c9365924ac1d62fb591488d31d77" Namespace="calico-system" Pod="goldmane-7988f88666-csr6g" WorkloadEndpoint="localhost-k8s-goldmane--7988f88666--csr6g-eth0" Sep 9 00:33:17.278328 systemd[1]: Started sshd@7-10.0.0.108:22-10.0.0.1:41240.service - OpenSSH per-connection server daemon (10.0.0.1:41240). Sep 9 00:33:17.288006 containerd[1592]: time="2025-09-09T00:33:17.287922979Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 9 00:33:17.288236 containerd[1592]: time="2025-09-09T00:33:17.287973612Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 9 00:33:17.288236 containerd[1592]: time="2025-09-09T00:33:17.287992927Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 9 00:33:17.288236 containerd[1592]: time="2025-09-09T00:33:17.288105944Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 9 00:33:17.321285 sshd[4386]: Accepted publickey for core from 10.0.0.1 port 41240 ssh2: RSA SHA256:KA9/SrLi0HJZnQ3nz9u7pIgg2ymhn74LV9fPMAvvX5M Sep 9 00:33:17.321242 systemd-resolved[1467]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 9 00:33:17.323670 sshd[4386]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 00:33:17.329244 systemd-logind[1548]: New session 8 of user core. Sep 9 00:33:17.335793 systemd[1]: Started session-8.scope - Session 8 of User core. 
Sep 9 00:33:17.360299 containerd[1592]: time="2025-09-09T00:33:17.360233372Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7988f88666-csr6g,Uid:1a0e64da-2e8f-4229-9092-4e3f71b7565b,Namespace:calico-system,Attempt:1,} returns sandbox id \"b7c1aa64bb7acb9829411e84de7fcc335853c9365924ac1d62fb591488d31d77\"" Sep 9 00:33:17.496208 sshd[4386]: pam_unix(sshd:session): session closed for user core Sep 9 00:33:17.500504 systemd[1]: sshd@7-10.0.0.108:22-10.0.0.1:41240.service: Deactivated successfully. Sep 9 00:33:17.503816 systemd-logind[1548]: Session 8 logged out. Waiting for processes to exit. Sep 9 00:33:17.504202 systemd[1]: session-8.scope: Deactivated successfully. Sep 9 00:33:17.505651 systemd-logind[1548]: Removed session 8. Sep 9 00:33:17.524728 containerd[1592]: time="2025-09-09T00:33:17.524128343Z" level=info msg="StopPodSandbox for \"31677aded0820c2ce1d921994768b27533b6946692105fd6396b357e4cb8c793\"" Sep 9 00:33:17.608018 containerd[1592]: 2025-09-09 00:33:17.565 [INFO][4447] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="31677aded0820c2ce1d921994768b27533b6946692105fd6396b357e4cb8c793" Sep 9 00:33:17.608018 containerd[1592]: 2025-09-09 00:33:17.565 [INFO][4447] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="31677aded0820c2ce1d921994768b27533b6946692105fd6396b357e4cb8c793" iface="eth0" netns="/var/run/netns/cni-6befe520-8c59-f48d-e93f-8771dc0dc483" Sep 9 00:33:17.608018 containerd[1592]: 2025-09-09 00:33:17.566 [INFO][4447] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="31677aded0820c2ce1d921994768b27533b6946692105fd6396b357e4cb8c793" iface="eth0" netns="/var/run/netns/cni-6befe520-8c59-f48d-e93f-8771dc0dc483" Sep 9 00:33:17.608018 containerd[1592]: 2025-09-09 00:33:17.566 [INFO][4447] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="31677aded0820c2ce1d921994768b27533b6946692105fd6396b357e4cb8c793" iface="eth0" netns="/var/run/netns/cni-6befe520-8c59-f48d-e93f-8771dc0dc483" Sep 9 00:33:17.608018 containerd[1592]: 2025-09-09 00:33:17.566 [INFO][4447] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="31677aded0820c2ce1d921994768b27533b6946692105fd6396b357e4cb8c793" Sep 9 00:33:17.608018 containerd[1592]: 2025-09-09 00:33:17.566 [INFO][4447] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="31677aded0820c2ce1d921994768b27533b6946692105fd6396b357e4cb8c793" Sep 9 00:33:17.608018 containerd[1592]: 2025-09-09 00:33:17.595 [INFO][4456] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="31677aded0820c2ce1d921994768b27533b6946692105fd6396b357e4cb8c793" HandleID="k8s-pod-network.31677aded0820c2ce1d921994768b27533b6946692105fd6396b357e4cb8c793" Workload="localhost-k8s-csi--node--driver--dvfmz-eth0" Sep 9 00:33:17.608018 containerd[1592]: 2025-09-09 00:33:17.596 [INFO][4456] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 9 00:33:17.608018 containerd[1592]: 2025-09-09 00:33:17.596 [INFO][4456] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 9 00:33:17.608018 containerd[1592]: 2025-09-09 00:33:17.601 [WARNING][4456] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="31677aded0820c2ce1d921994768b27533b6946692105fd6396b357e4cb8c793" HandleID="k8s-pod-network.31677aded0820c2ce1d921994768b27533b6946692105fd6396b357e4cb8c793" Workload="localhost-k8s-csi--node--driver--dvfmz-eth0" Sep 9 00:33:17.608018 containerd[1592]: 2025-09-09 00:33:17.601 [INFO][4456] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="31677aded0820c2ce1d921994768b27533b6946692105fd6396b357e4cb8c793" HandleID="k8s-pod-network.31677aded0820c2ce1d921994768b27533b6946692105fd6396b357e4cb8c793" Workload="localhost-k8s-csi--node--driver--dvfmz-eth0" Sep 9 00:33:17.608018 containerd[1592]: 2025-09-09 00:33:17.602 [INFO][4456] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 9 00:33:17.608018 containerd[1592]: 2025-09-09 00:33:17.605 [INFO][4447] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="31677aded0820c2ce1d921994768b27533b6946692105fd6396b357e4cb8c793" Sep 9 00:33:17.608481 containerd[1592]: time="2025-09-09T00:33:17.608201353Z" level=info msg="TearDown network for sandbox \"31677aded0820c2ce1d921994768b27533b6946692105fd6396b357e4cb8c793\" successfully" Sep 9 00:33:17.608481 containerd[1592]: time="2025-09-09T00:33:17.608283894Z" level=info msg="StopPodSandbox for \"31677aded0820c2ce1d921994768b27533b6946692105fd6396b357e4cb8c793\" returns successfully" Sep 9 00:33:17.609353 containerd[1592]: time="2025-09-09T00:33:17.609307558Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-dvfmz,Uid:45baac1d-c9f0-4704-a887-7b015b292f0b,Namespace:calico-system,Attempt:1,}" Sep 9 00:33:17.616147 systemd[1]: run-netns-cni\x2d6befe520\x2d8c59\x2df48d\x2de93f\x2d8771dc0dc483.mount: Deactivated successfully. Sep 9 00:33:17.792647 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1206265541.mount: Deactivated successfully. 
Sep 9 00:33:17.799436 systemd-networkd[1247]: calibbbcfddec73: Link UP Sep 9 00:33:17.799668 systemd-networkd[1247]: calibbbcfddec73: Gained carrier Sep 9 00:33:18.014004 containerd[1592]: 2025-09-09 00:33:17.725 [INFO][4463] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-csi--node--driver--dvfmz-eth0 csi-node-driver- calico-system 45baac1d-c9f0-4704-a887-7b015b292f0b 995 0 2025-09-09 00:32:55 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:856c6b598f k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s localhost csi-node-driver-dvfmz eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] calibbbcfddec73 [] [] }} ContainerID="3970af2ee6d7af935efa54112c32cb7fae6ee0f92b0f900a72d4cb914a71478d" Namespace="calico-system" Pod="csi-node-driver-dvfmz" WorkloadEndpoint="localhost-k8s-csi--node--driver--dvfmz-" Sep 9 00:33:18.014004 containerd[1592]: 2025-09-09 00:33:17.726 [INFO][4463] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="3970af2ee6d7af935efa54112c32cb7fae6ee0f92b0f900a72d4cb914a71478d" Namespace="calico-system" Pod="csi-node-driver-dvfmz" WorkloadEndpoint="localhost-k8s-csi--node--driver--dvfmz-eth0" Sep 9 00:33:18.014004 containerd[1592]: 2025-09-09 00:33:17.755 [INFO][4477] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="3970af2ee6d7af935efa54112c32cb7fae6ee0f92b0f900a72d4cb914a71478d" HandleID="k8s-pod-network.3970af2ee6d7af935efa54112c32cb7fae6ee0f92b0f900a72d4cb914a71478d" Workload="localhost-k8s-csi--node--driver--dvfmz-eth0" Sep 9 00:33:18.014004 containerd[1592]: 2025-09-09 00:33:17.756 [INFO][4477] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="3970af2ee6d7af935efa54112c32cb7fae6ee0f92b0f900a72d4cb914a71478d" HandleID="k8s-pod-network.3970af2ee6d7af935efa54112c32cb7fae6ee0f92b0f900a72d4cb914a71478d" Workload="localhost-k8s-csi--node--driver--dvfmz-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000138e30), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"csi-node-driver-dvfmz", "timestamp":"2025-09-09 00:33:17.755210069 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 9 00:33:18.014004 containerd[1592]: 2025-09-09 00:33:17.757 [INFO][4477] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 9 00:33:18.014004 containerd[1592]: 2025-09-09 00:33:17.757 [INFO][4477] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 9 00:33:18.014004 containerd[1592]: 2025-09-09 00:33:17.757 [INFO][4477] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 9 00:33:18.014004 containerd[1592]: 2025-09-09 00:33:17.764 [INFO][4477] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.3970af2ee6d7af935efa54112c32cb7fae6ee0f92b0f900a72d4cb914a71478d" host="localhost" Sep 9 00:33:18.014004 containerd[1592]: 2025-09-09 00:33:17.770 [INFO][4477] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 9 00:33:18.014004 containerd[1592]: 2025-09-09 00:33:17.774 [INFO][4477] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 9 00:33:18.014004 containerd[1592]: 2025-09-09 00:33:17.776 [INFO][4477] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 9 00:33:18.014004 containerd[1592]: 2025-09-09 00:33:17.778 [INFO][4477] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 9 00:33:18.014004 containerd[1592]: 2025-09-09 00:33:17.778 [INFO][4477] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.3970af2ee6d7af935efa54112c32cb7fae6ee0f92b0f900a72d4cb914a71478d" host="localhost" Sep 9 00:33:18.014004 containerd[1592]: 2025-09-09 00:33:17.779 [INFO][4477] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.3970af2ee6d7af935efa54112c32cb7fae6ee0f92b0f900a72d4cb914a71478d Sep 9 00:33:18.014004 containerd[1592]: 2025-09-09 00:33:17.783 [INFO][4477] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.3970af2ee6d7af935efa54112c32cb7fae6ee0f92b0f900a72d4cb914a71478d" host="localhost" Sep 9 00:33:18.014004 containerd[1592]: 2025-09-09 00:33:17.790 [INFO][4477] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.131/26] block=192.168.88.128/26 handle="k8s-pod-network.3970af2ee6d7af935efa54112c32cb7fae6ee0f92b0f900a72d4cb914a71478d" host="localhost" Sep 9 00:33:18.014004 containerd[1592]: 2025-09-09 00:33:17.790 [INFO][4477] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.131/26] handle="k8s-pod-network.3970af2ee6d7af935efa54112c32cb7fae6ee0f92b0f900a72d4cb914a71478d" host="localhost" Sep 9 00:33:18.014004 containerd[1592]: 2025-09-09 00:33:17.790 [INFO][4477] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Sep 9 00:33:18.014004 containerd[1592]: 2025-09-09 00:33:17.790 [INFO][4477] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.131/26] IPv6=[] ContainerID="3970af2ee6d7af935efa54112c32cb7fae6ee0f92b0f900a72d4cb914a71478d" HandleID="k8s-pod-network.3970af2ee6d7af935efa54112c32cb7fae6ee0f92b0f900a72d4cb914a71478d" Workload="localhost-k8s-csi--node--driver--dvfmz-eth0" Sep 9 00:33:18.014767 containerd[1592]: 2025-09-09 00:33:17.796 [INFO][4463] cni-plugin/k8s.go 418: Populated endpoint ContainerID="3970af2ee6d7af935efa54112c32cb7fae6ee0f92b0f900a72d4cb914a71478d" Namespace="calico-system" Pod="csi-node-driver-dvfmz" WorkloadEndpoint="localhost-k8s-csi--node--driver--dvfmz-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--dvfmz-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"45baac1d-c9f0-4704-a887-7b015b292f0b", ResourceVersion:"995", Generation:0, CreationTimestamp:time.Date(2025, time.September, 9, 0, 32, 55, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"856c6b598f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"csi-node-driver-dvfmz", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calibbbcfddec73", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 9 00:33:18.014767 containerd[1592]: 2025-09-09 00:33:17.796 [INFO][4463] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.131/32] ContainerID="3970af2ee6d7af935efa54112c32cb7fae6ee0f92b0f900a72d4cb914a71478d" Namespace="calico-system" Pod="csi-node-driver-dvfmz" WorkloadEndpoint="localhost-k8s-csi--node--driver--dvfmz-eth0" Sep 9 00:33:18.014767 containerd[1592]: 2025-09-09 00:33:17.796 [INFO][4463] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calibbbcfddec73 ContainerID="3970af2ee6d7af935efa54112c32cb7fae6ee0f92b0f900a72d4cb914a71478d" Namespace="calico-system" Pod="csi-node-driver-dvfmz" WorkloadEndpoint="localhost-k8s-csi--node--driver--dvfmz-eth0" Sep 9 00:33:18.014767 containerd[1592]: 2025-09-09 00:33:17.798 [INFO][4463] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="3970af2ee6d7af935efa54112c32cb7fae6ee0f92b0f900a72d4cb914a71478d" Namespace="calico-system" Pod="csi-node-driver-dvfmz" WorkloadEndpoint="localhost-k8s-csi--node--driver--dvfmz-eth0" Sep 9 00:33:18.014767 containerd[1592]: 2025-09-09 00:33:17.799 [INFO][4463] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="3970af2ee6d7af935efa54112c32cb7fae6ee0f92b0f900a72d4cb914a71478d" Namespace="calico-system" Pod="csi-node-driver-dvfmz" 
WorkloadEndpoint="localhost-k8s-csi--node--driver--dvfmz-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--dvfmz-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"45baac1d-c9f0-4704-a887-7b015b292f0b", ResourceVersion:"995", Generation:0, CreationTimestamp:time.Date(2025, time.September, 9, 0, 32, 55, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"856c6b598f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"3970af2ee6d7af935efa54112c32cb7fae6ee0f92b0f900a72d4cb914a71478d", Pod:"csi-node-driver-dvfmz", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calibbbcfddec73", MAC:"86:bb:01:6a:b5:3e", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 9 00:33:18.014767 containerd[1592]: 2025-09-09 00:33:18.011 [INFO][4463] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="3970af2ee6d7af935efa54112c32cb7fae6ee0f92b0f900a72d4cb914a71478d" Namespace="calico-system" Pod="csi-node-driver-dvfmz" WorkloadEndpoint="localhost-k8s-csi--node--driver--dvfmz-eth0" Sep 9 00:33:18.034894 containerd[1592]: time="2025-09-09T00:33:18.034142935Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 9 00:33:18.034894 containerd[1592]: time="2025-09-09T00:33:18.034876059Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 9 00:33:18.035113 containerd[1592]: time="2025-09-09T00:33:18.034892751Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 9 00:33:18.035113 containerd[1592]: time="2025-09-09T00:33:18.035048746Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 9 00:33:18.041343 containerd[1592]: time="2025-09-09T00:33:18.041294623Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 00:33:18.042232 containerd[1592]: time="2025-09-09T00:33:18.042196187Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.3: active requests=0, bytes read=33085545" Sep 9 00:33:18.043365 containerd[1592]: time="2025-09-09T00:33:18.043318506Z" level=info msg="ImageCreate event name:\"sha256:7e29b0984d517678aab6ca138482c318989f6f28daf9d3b5dd6e4a5a3115ac16\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 00:33:18.045498 containerd[1592]: time="2025-09-09T00:33:18.045463751Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend@sha256:29becebc47401da9997a2a30f4c25c511a5f379d17275680b048224829af71a5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 00:33:18.046181 containerd[1592]: time="2025-09-09T00:33:18.046137476Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\" with image id \"sha256:7e29b0984d517678aab6ca138482c318989f6f28daf9d3b5dd6e4a5a3115ac16\", repo tag \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/whisker-backend@sha256:29becebc47401da9997a2a30f4c25c511a5f379d17275680b048224829af71a5\", size \"33085375\" in 3.099873564s" Sep 9 00:33:18.047969 containerd[1592]: time="2025-09-09T00:33:18.046172391Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\" returns image reference \"sha256:7e29b0984d517678aab6ca138482c318989f6f28daf9d3b5dd6e4a5a3115ac16\"" Sep 9 00:33:18.071930 containerd[1592]: time="2025-09-09T00:33:18.070812107Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.3\"" Sep 9 00:33:18.071944 systemd-resolved[1467]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 9 00:33:18.072375 containerd[1592]: time="2025-09-09T00:33:18.072282474Z" level=info msg="CreateContainer within sandbox \"7d704a29111e8d2dbbf8bb5e1c2b62005c81b17f172326aceea906596f3c18ec\" for container &ContainerMetadata{Name:whisker-backend,Attempt:0,}" Sep 9 00:33:18.085894 containerd[1592]: time="2025-09-09T00:33:18.085830896Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-dvfmz,Uid:45baac1d-c9f0-4704-a887-7b015b292f0b,Namespace:calico-system,Attempt:1,} returns sandbox id \"3970af2ee6d7af935efa54112c32cb7fae6ee0f92b0f900a72d4cb914a71478d\"" Sep 9 00:33:18.094201 containerd[1592]: time="2025-09-09T00:33:18.094133836Z" level=info msg="CreateContainer within sandbox \"7d704a29111e8d2dbbf8bb5e1c2b62005c81b17f172326aceea906596f3c18ec\" for &ContainerMetadata{Name:whisker-backend,Attempt:0,} returns container id \"1568dbd53c1dd0783607064b53caa1fcc37a3e4d4c3a1aa731df097f0f3383e2\"" Sep 9 00:33:18.094808 containerd[1592]: time="2025-09-09T00:33:18.094764392Z" level=info msg="StartContainer for \"1568dbd53c1dd0783607064b53caa1fcc37a3e4d4c3a1aa731df097f0f3383e2\"" Sep 9 00:33:18.162394 containerd[1592]: time="2025-09-09T00:33:18.162351667Z" level=info msg="StartContainer for \"1568dbd53c1dd0783607064b53caa1fcc37a3e4d4c3a1aa731df097f0f3383e2\" returns successfully" Sep 9 00:33:18.511643 systemd-networkd[1247]: cali7cd6eb22e43: Gained IPv6LL Sep 9 00:33:18.523957 containerd[1592]: time="2025-09-09T00:33:18.523728727Z" level=info msg="StopPodSandbox 
for \"d2e24a1ecc7d9da02b3a4c3b268eb9aa6520b3498cba4e49f0f6eac8888917f3\"" Sep 9 00:33:18.523957 containerd[1592]: time="2025-09-09T00:33:18.523745488Z" level=info msg="StopPodSandbox for \"99a29b286bc17ccc596281946947470e062d46364a22d7b602e1cec39d31bc2e\"" Sep 9 00:33:18.524161 containerd[1592]: time="2025-09-09T00:33:18.523728737Z" level=info msg="StopPodSandbox for \"dc974c20813b8cc8d10aa01c404cb83680815b33858df5921f8aaa95fbe0dd7a\"" Sep 9 00:33:18.643357 containerd[1592]: 2025-09-09 00:33:18.589 [INFO][4627] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="dc974c20813b8cc8d10aa01c404cb83680815b33858df5921f8aaa95fbe0dd7a" Sep 9 00:33:18.643357 containerd[1592]: 2025-09-09 00:33:18.590 [INFO][4627] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="dc974c20813b8cc8d10aa01c404cb83680815b33858df5921f8aaa95fbe0dd7a" iface="eth0" netns="/var/run/netns/cni-ceb76dda-cef2-20bf-19e6-98b215b40fba" Sep 9 00:33:18.643357 containerd[1592]: 2025-09-09 00:33:18.590 [INFO][4627] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="dc974c20813b8cc8d10aa01c404cb83680815b33858df5921f8aaa95fbe0dd7a" iface="eth0" netns="/var/run/netns/cni-ceb76dda-cef2-20bf-19e6-98b215b40fba" Sep 9 00:33:18.643357 containerd[1592]: 2025-09-09 00:33:18.590 [INFO][4627] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="dc974c20813b8cc8d10aa01c404cb83680815b33858df5921f8aaa95fbe0dd7a" iface="eth0" netns="/var/run/netns/cni-ceb76dda-cef2-20bf-19e6-98b215b40fba" Sep 9 00:33:18.643357 containerd[1592]: 2025-09-09 00:33:18.590 [INFO][4627] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="dc974c20813b8cc8d10aa01c404cb83680815b33858df5921f8aaa95fbe0dd7a" Sep 9 00:33:18.643357 containerd[1592]: 2025-09-09 00:33:18.590 [INFO][4627] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="dc974c20813b8cc8d10aa01c404cb83680815b33858df5921f8aaa95fbe0dd7a" Sep 9 00:33:18.643357 containerd[1592]: 2025-09-09 00:33:18.629 [INFO][4640] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="dc974c20813b8cc8d10aa01c404cb83680815b33858df5921f8aaa95fbe0dd7a" HandleID="k8s-pod-network.dc974c20813b8cc8d10aa01c404cb83680815b33858df5921f8aaa95fbe0dd7a" Workload="localhost-k8s-calico--kube--controllers--55bfc5d889--9l6s4-eth0" Sep 9 00:33:18.643357 containerd[1592]: 2025-09-09 00:33:18.629 [INFO][4640] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 9 00:33:18.643357 containerd[1592]: 2025-09-09 00:33:18.629 [INFO][4640] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 9 00:33:18.643357 containerd[1592]: 2025-09-09 00:33:18.636 [WARNING][4640] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="dc974c20813b8cc8d10aa01c404cb83680815b33858df5921f8aaa95fbe0dd7a" HandleID="k8s-pod-network.dc974c20813b8cc8d10aa01c404cb83680815b33858df5921f8aaa95fbe0dd7a" Workload="localhost-k8s-calico--kube--controllers--55bfc5d889--9l6s4-eth0" Sep 9 00:33:18.643357 containerd[1592]: 2025-09-09 00:33:18.636 [INFO][4640] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="dc974c20813b8cc8d10aa01c404cb83680815b33858df5921f8aaa95fbe0dd7a" HandleID="k8s-pod-network.dc974c20813b8cc8d10aa01c404cb83680815b33858df5921f8aaa95fbe0dd7a" Workload="localhost-k8s-calico--kube--controllers--55bfc5d889--9l6s4-eth0" Sep 9 00:33:18.643357 containerd[1592]: 2025-09-09 00:33:18.637 [INFO][4640] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Sep 9 00:33:18.643357 containerd[1592]: 2025-09-09 00:33:18.640 [INFO][4627] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="dc974c20813b8cc8d10aa01c404cb83680815b33858df5921f8aaa95fbe0dd7a" Sep 9 00:33:18.645838 containerd[1592]: time="2025-09-09T00:33:18.644130729Z" level=info msg="TearDown network for sandbox \"dc974c20813b8cc8d10aa01c404cb83680815b33858df5921f8aaa95fbe0dd7a\" successfully" Sep 9 00:33:18.645838 containerd[1592]: time="2025-09-09T00:33:18.644203573Z" level=info msg="StopPodSandbox for \"dc974c20813b8cc8d10aa01c404cb83680815b33858df5921f8aaa95fbe0dd7a\" returns successfully" Sep 9 00:33:18.645838 containerd[1592]: time="2025-09-09T00:33:18.644995005Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-55bfc5d889-9l6s4,Uid:25010456-202e-41ea-aa9f-fe497ae64e66,Namespace:calico-system,Attempt:1,}" Sep 9 00:33:18.647629 systemd[1]: run-netns-cni\x2dceb76dda\x2dcef2\x2d20bf\x2d19e6\x2d98b215b40fba.mount: Deactivated successfully. Sep 9 00:33:18.651302 containerd[1592]: 2025-09-09 00:33:18.597 [INFO][4616] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="99a29b286bc17ccc596281946947470e062d46364a22d7b602e1cec39d31bc2e" Sep 9 00:33:18.651302 containerd[1592]: 2025-09-09 00:33:18.597 [INFO][4616] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="99a29b286bc17ccc596281946947470e062d46364a22d7b602e1cec39d31bc2e" iface="eth0" netns="/var/run/netns/cni-4d886c68-bf9a-eaf1-8274-4311766aac94" Sep 9 00:33:18.651302 containerd[1592]: 2025-09-09 00:33:18.598 [INFO][4616] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="99a29b286bc17ccc596281946947470e062d46364a22d7b602e1cec39d31bc2e" iface="eth0" netns="/var/run/netns/cni-4d886c68-bf9a-eaf1-8274-4311766aac94" Sep 9 00:33:18.651302 containerd[1592]: 2025-09-09 00:33:18.598 [INFO][4616] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="99a29b286bc17ccc596281946947470e062d46364a22d7b602e1cec39d31bc2e" iface="eth0" netns="/var/run/netns/cni-4d886c68-bf9a-eaf1-8274-4311766aac94" Sep 9 00:33:18.651302 containerd[1592]: 2025-09-09 00:33:18.598 [INFO][4616] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="99a29b286bc17ccc596281946947470e062d46364a22d7b602e1cec39d31bc2e" Sep 9 00:33:18.651302 containerd[1592]: 2025-09-09 00:33:18.598 [INFO][4616] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="99a29b286bc17ccc596281946947470e062d46364a22d7b602e1cec39d31bc2e" Sep 9 00:33:18.651302 containerd[1592]: 2025-09-09 00:33:18.632 [INFO][4654] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="99a29b286bc17ccc596281946947470e062d46364a22d7b602e1cec39d31bc2e" HandleID="k8s-pod-network.99a29b286bc17ccc596281946947470e062d46364a22d7b602e1cec39d31bc2e" Workload="localhost-k8s-coredns--7c65d6cfc9--4m9xw-eth0" Sep 9 00:33:18.651302 containerd[1592]: 2025-09-09 00:33:18.632 [INFO][4654] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 9 00:33:18.651302 containerd[1592]: 2025-09-09 00:33:18.637 [INFO][4654] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 9 00:33:18.651302 containerd[1592]: 2025-09-09 00:33:18.641 [WARNING][4654] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="99a29b286bc17ccc596281946947470e062d46364a22d7b602e1cec39d31bc2e" HandleID="k8s-pod-network.99a29b286bc17ccc596281946947470e062d46364a22d7b602e1cec39d31bc2e" Workload="localhost-k8s-coredns--7c65d6cfc9--4m9xw-eth0" Sep 9 00:33:18.651302 containerd[1592]: 2025-09-09 00:33:18.641 [INFO][4654] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="99a29b286bc17ccc596281946947470e062d46364a22d7b602e1cec39d31bc2e" HandleID="k8s-pod-network.99a29b286bc17ccc596281946947470e062d46364a22d7b602e1cec39d31bc2e" Workload="localhost-k8s-coredns--7c65d6cfc9--4m9xw-eth0" Sep 9 00:33:18.651302 containerd[1592]: 2025-09-09 00:33:18.642 [INFO][4654] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 9 00:33:18.651302 containerd[1592]: 2025-09-09 00:33:18.648 [INFO][4616] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="99a29b286bc17ccc596281946947470e062d46364a22d7b602e1cec39d31bc2e" Sep 9 00:33:18.651701 containerd[1592]: time="2025-09-09T00:33:18.651574043Z" level=info msg="TearDown network for sandbox \"99a29b286bc17ccc596281946947470e062d46364a22d7b602e1cec39d31bc2e\" successfully" Sep 9 00:33:18.651701 containerd[1592]: time="2025-09-09T00:33:18.651600381Z" level=info msg="StopPodSandbox for \"99a29b286bc17ccc596281946947470e062d46364a22d7b602e1cec39d31bc2e\" returns successfully" Sep 9 00:33:18.653889 kubelet[2703]: E0909 00:33:18.653839 2703 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:33:18.654812 containerd[1592]: time="2025-09-09T00:33:18.654201773Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-4m9xw,Uid:ec5d7db1-2706-41c7-b992-bd43c3dcfac0,Namespace:kube-system,Attempt:1,}" Sep 9 00:33:18.656559 systemd[1]: run-netns-cni\x2d4d886c68\x2dbf9a\x2deaf1\x2d8274\x2d4311766aac94.mount: Deactivated successfully. Sep 9 00:33:18.659789 containerd[1592]: 2025-09-09 00:33:18.589 [INFO][4617] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="d2e24a1ecc7d9da02b3a4c3b268eb9aa6520b3498cba4e49f0f6eac8888917f3" Sep 9 00:33:18.659789 containerd[1592]: 2025-09-09 00:33:18.589 [INFO][4617] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="d2e24a1ecc7d9da02b3a4c3b268eb9aa6520b3498cba4e49f0f6eac8888917f3" iface="eth0" netns="/var/run/netns/cni-e539b08d-9370-3252-b915-b07aab546520" Sep 9 00:33:18.659789 containerd[1592]: 2025-09-09 00:33:18.593 [INFO][4617] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="d2e24a1ecc7d9da02b3a4c3b268eb9aa6520b3498cba4e49f0f6eac8888917f3" iface="eth0" netns="/var/run/netns/cni-e539b08d-9370-3252-b915-b07aab546520" Sep 9 00:33:18.659789 containerd[1592]: 2025-09-09 00:33:18.594 [INFO][4617] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="d2e24a1ecc7d9da02b3a4c3b268eb9aa6520b3498cba4e49f0f6eac8888917f3" iface="eth0" netns="/var/run/netns/cni-e539b08d-9370-3252-b915-b07aab546520" Sep 9 00:33:18.659789 containerd[1592]: 2025-09-09 00:33:18.595 [INFO][4617] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="d2e24a1ecc7d9da02b3a4c3b268eb9aa6520b3498cba4e49f0f6eac8888917f3" Sep 9 00:33:18.659789 containerd[1592]: 2025-09-09 00:33:18.595 [INFO][4617] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="d2e24a1ecc7d9da02b3a4c3b268eb9aa6520b3498cba4e49f0f6eac8888917f3" Sep 9 00:33:18.659789 containerd[1592]: 2025-09-09 00:33:18.634 [INFO][4643] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="d2e24a1ecc7d9da02b3a4c3b268eb9aa6520b3498cba4e49f0f6eac8888917f3" HandleID="k8s-pod-network.d2e24a1ecc7d9da02b3a4c3b268eb9aa6520b3498cba4e49f0f6eac8888917f3" Workload="localhost-k8s-calico--apiserver--6544c75f8--n8q9k-eth0" Sep 9 00:33:18.659789 containerd[1592]: 2025-09-09 00:33:18.634 [INFO][4643] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 9 00:33:18.659789 containerd[1592]: 2025-09-09 00:33:18.642 [INFO][4643] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 9 00:33:18.659789 containerd[1592]: 2025-09-09 00:33:18.650 [WARNING][4643] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="d2e24a1ecc7d9da02b3a4c3b268eb9aa6520b3498cba4e49f0f6eac8888917f3" HandleID="k8s-pod-network.d2e24a1ecc7d9da02b3a4c3b268eb9aa6520b3498cba4e49f0f6eac8888917f3" Workload="localhost-k8s-calico--apiserver--6544c75f8--n8q9k-eth0" Sep 9 00:33:18.659789 containerd[1592]: 2025-09-09 00:33:18.650 [INFO][4643] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="d2e24a1ecc7d9da02b3a4c3b268eb9aa6520b3498cba4e49f0f6eac8888917f3" HandleID="k8s-pod-network.d2e24a1ecc7d9da02b3a4c3b268eb9aa6520b3498cba4e49f0f6eac8888917f3" Workload="localhost-k8s-calico--apiserver--6544c75f8--n8q9k-eth0" Sep 9 00:33:18.659789 containerd[1592]: 2025-09-09 00:33:18.651 [INFO][4643] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 9 00:33:18.659789 containerd[1592]: 2025-09-09 00:33:18.655 [INFO][4617] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="d2e24a1ecc7d9da02b3a4c3b268eb9aa6520b3498cba4e49f0f6eac8888917f3" Sep 9 00:33:18.660362 containerd[1592]: time="2025-09-09T00:33:18.659924960Z" level=info msg="TearDown network for sandbox \"d2e24a1ecc7d9da02b3a4c3b268eb9aa6520b3498cba4e49f0f6eac8888917f3\" successfully" Sep 9 00:33:18.660362 containerd[1592]: time="2025-09-09T00:33:18.659947211Z" level=info msg="StopPodSandbox for \"d2e24a1ecc7d9da02b3a4c3b268eb9aa6520b3498cba4e49f0f6eac8888917f3\" returns successfully" Sep 9 00:33:18.660815 containerd[1592]: time="2025-09-09T00:33:18.660634592Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6544c75f8-n8q9k,Uid:cba14676-c2d2-4393-a6fd-b4ef0dc67fba,Namespace:calico-apiserver,Attempt:1,}" Sep 9 00:33:18.666079 systemd[1]: run-netns-cni\x2de539b08d\x2d9370\x2d3252\x2db915\x2db07aab546520.mount: Deactivated successfully. 
Sep 9 00:33:18.825495 systemd-networkd[1247]: calie5ed1746639: Link UP Sep 9 00:33:18.825933 systemd-networkd[1247]: calie5ed1746639: Gained carrier Sep 9 00:33:18.838270 kubelet[2703]: I0909 00:33:18.838182 2703 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/whisker-56b66b49b9-ts5r5" podStartSLOduration=2.153619593 podStartE2EDuration="6.83815745s" podCreationTimestamp="2025-09-09 00:33:12 +0000 UTC" firstStartedPulling="2025-09-09 00:33:13.384462046 +0000 UTC m=+34.969845749" lastFinishedPulling="2025-09-09 00:33:18.068999903 +0000 UTC m=+39.654383606" observedRunningTime="2025-09-09 00:33:18.761592318 +0000 UTC m=+40.346976021" watchObservedRunningTime="2025-09-09 00:33:18.83815745 +0000 UTC m=+40.423541153" Sep 9 00:33:18.846284 containerd[1592]: 2025-09-09 00:33:18.699 [INFO][4666] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--kube--controllers--55bfc5d889--9l6s4-eth0 calico-kube-controllers-55bfc5d889- calico-system 25010456-202e-41ea-aa9f-fe497ae64e66 1010 0 2025-09-09 00:32:55 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:55bfc5d889 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s localhost calico-kube-controllers-55bfc5d889-9l6s4 eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] calie5ed1746639 [] [] }} ContainerID="72616e094ada893052a5655b686c16af5c95a45ce366e628808970b2352d5f80" Namespace="calico-system" Pod="calico-kube-controllers-55bfc5d889-9l6s4" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--55bfc5d889--9l6s4-" Sep 9 00:33:18.846284 containerd[1592]: 2025-09-09 00:33:18.699 [INFO][4666] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="72616e094ada893052a5655b686c16af5c95a45ce366e628808970b2352d5f80" Namespace="calico-system" Pod="calico-kube-controllers-55bfc5d889-9l6s4" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--55bfc5d889--9l6s4-eth0" Sep 9 00:33:18.846284 containerd[1592]: 2025-09-09 00:33:18.727 [INFO][4704] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="72616e094ada893052a5655b686c16af5c95a45ce366e628808970b2352d5f80" HandleID="k8s-pod-network.72616e094ada893052a5655b686c16af5c95a45ce366e628808970b2352d5f80" Workload="localhost-k8s-calico--kube--controllers--55bfc5d889--9l6s4-eth0" Sep 9 00:33:18.846284 containerd[1592]: 2025-09-09 00:33:18.727 [INFO][4704] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="72616e094ada893052a5655b686c16af5c95a45ce366e628808970b2352d5f80" HandleID="k8s-pod-network.72616e094ada893052a5655b686c16af5c95a45ce366e628808970b2352d5f80" Workload="localhost-k8s-calico--kube--controllers--55bfc5d889--9l6s4-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004fb40), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-kube-controllers-55bfc5d889-9l6s4", "timestamp":"2025-09-09 00:33:18.727374943 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 9 00:33:18.846284 containerd[1592]: 2025-09-09 00:33:18.727 [INFO][4704] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. 
Sep 9 00:33:18.846284 containerd[1592]: 2025-09-09 00:33:18.727 [INFO][4704] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 9 00:33:18.846284 containerd[1592]: 2025-09-09 00:33:18.727 [INFO][4704] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 9 00:33:18.846284 containerd[1592]: 2025-09-09 00:33:18.734 [INFO][4704] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.72616e094ada893052a5655b686c16af5c95a45ce366e628808970b2352d5f80" host="localhost" Sep 9 00:33:18.846284 containerd[1592]: 2025-09-09 00:33:18.739 [INFO][4704] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 9 00:33:18.846284 containerd[1592]: 2025-09-09 00:33:18.748 [INFO][4704] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 9 00:33:18.846284 containerd[1592]: 2025-09-09 00:33:18.750 [INFO][4704] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 9 00:33:18.846284 containerd[1592]: 2025-09-09 00:33:18.759 [INFO][4704] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 9 00:33:18.846284 containerd[1592]: 2025-09-09 00:33:18.763 [INFO][4704] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.72616e094ada893052a5655b686c16af5c95a45ce366e628808970b2352d5f80" host="localhost" Sep 9 00:33:18.846284 containerd[1592]: 2025-09-09 00:33:18.780 [INFO][4704] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.72616e094ada893052a5655b686c16af5c95a45ce366e628808970b2352d5f80 Sep 9 00:33:18.846284 containerd[1592]: 2025-09-09 00:33:18.801 [INFO][4704] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.72616e094ada893052a5655b686c16af5c95a45ce366e628808970b2352d5f80" host="localhost" Sep 9 00:33:18.846284 containerd[1592]: 2025-09-09 00:33:18.813 [INFO][4704] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.132/26] block=192.168.88.128/26 handle="k8s-pod-network.72616e094ada893052a5655b686c16af5c95a45ce366e628808970b2352d5f80" host="localhost" Sep 9 00:33:18.846284 containerd[1592]: 2025-09-09 00:33:18.814 [INFO][4704] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.132/26] handle="k8s-pod-network.72616e094ada893052a5655b686c16af5c95a45ce366e628808970b2352d5f80" host="localhost" Sep 9 00:33:18.846284 containerd[1592]: 2025-09-09 00:33:18.814 [INFO][4704] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Sep 9 00:33:18.846284 containerd[1592]: 2025-09-09 00:33:18.814 [INFO][4704] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.132/26] IPv6=[] ContainerID="72616e094ada893052a5655b686c16af5c95a45ce366e628808970b2352d5f80" HandleID="k8s-pod-network.72616e094ada893052a5655b686c16af5c95a45ce366e628808970b2352d5f80" Workload="localhost-k8s-calico--kube--controllers--55bfc5d889--9l6s4-eth0" Sep 9 00:33:18.847196 containerd[1592]: 2025-09-09 00:33:18.819 [INFO][4666] cni-plugin/k8s.go 418: Populated endpoint ContainerID="72616e094ada893052a5655b686c16af5c95a45ce366e628808970b2352d5f80" Namespace="calico-system" Pod="calico-kube-controllers-55bfc5d889-9l6s4" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--55bfc5d889--9l6s4-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--55bfc5d889--9l6s4-eth0", GenerateName:"calico-kube-controllers-55bfc5d889-", Namespace:"calico-system", SelfLink:"", UID:"25010456-202e-41ea-aa9f-fe497ae64e66", ResourceVersion:"1010", Generation:0, CreationTimestamp:time.Date(2025, time.September, 9, 0, 32, 55, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"55bfc5d889", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-kube-controllers-55bfc5d889-9l6s4", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calie5ed1746639", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 9 00:33:18.847196 containerd[1592]: 2025-09-09 00:33:18.819 [INFO][4666] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.132/32] ContainerID="72616e094ada893052a5655b686c16af5c95a45ce366e628808970b2352d5f80" Namespace="calico-system" Pod="calico-kube-controllers-55bfc5d889-9l6s4" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--55bfc5d889--9l6s4-eth0" Sep 9 00:33:18.847196 containerd[1592]: 2025-09-09 00:33:18.819 [INFO][4666] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calie5ed1746639 ContainerID="72616e094ada893052a5655b686c16af5c95a45ce366e628808970b2352d5f80" Namespace="calico-system" Pod="calico-kube-controllers-55bfc5d889-9l6s4" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--55bfc5d889--9l6s4-eth0" Sep 9 00:33:18.847196 containerd[1592]: 2025-09-09 00:33:18.826 [INFO][4666] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="72616e094ada893052a5655b686c16af5c95a45ce366e628808970b2352d5f80" Namespace="calico-system" Pod="calico-kube-controllers-55bfc5d889-9l6s4" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--55bfc5d889--9l6s4-eth0" Sep 9 00:33:18.847196 containerd[1592]: 2025-09-09 00:33:18.826 [INFO][4666] cni-plugin/k8s.go 446: Added Mac, 
interface name, and active container ID to endpoint ContainerID="72616e094ada893052a5655b686c16af5c95a45ce366e628808970b2352d5f80" Namespace="calico-system" Pod="calico-kube-controllers-55bfc5d889-9l6s4" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--55bfc5d889--9l6s4-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--55bfc5d889--9l6s4-eth0", GenerateName:"calico-kube-controllers-55bfc5d889-", Namespace:"calico-system", SelfLink:"", UID:"25010456-202e-41ea-aa9f-fe497ae64e66", ResourceVersion:"1010", Generation:0, CreationTimestamp:time.Date(2025, time.September, 9, 0, 32, 55, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"55bfc5d889", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"72616e094ada893052a5655b686c16af5c95a45ce366e628808970b2352d5f80", Pod:"calico-kube-controllers-55bfc5d889-9l6s4", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calie5ed1746639", MAC:"b2:2d:42:d9:ba:1e", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 9 00:33:18.847196 containerd[1592]: 2025-09-09 00:33:18.837 [INFO][4666] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="72616e094ada893052a5655b686c16af5c95a45ce366e628808970b2352d5f80" Namespace="calico-system" Pod="calico-kube-controllers-55bfc5d889-9l6s4" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--55bfc5d889--9l6s4-eth0" Sep 9 00:33:18.885704 containerd[1592]: time="2025-09-09T00:33:18.885393485Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 9 00:33:18.885704 containerd[1592]: time="2025-09-09T00:33:18.885477609Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 9 00:33:18.885704 containerd[1592]: time="2025-09-09T00:33:18.885489131Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 9 00:33:18.885704 containerd[1592]: time="2025-09-09T00:33:18.885598441Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 9 00:33:18.889132 systemd-networkd[1247]: calid2ae74cd4a7: Link UP Sep 9 00:33:18.889755 systemd-networkd[1247]: calid2ae74cd4a7: Gained carrier Sep 9 00:33:18.905731 containerd[1592]: 2025-09-09 00:33:18.737 [INFO][4678] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--7c65d6cfc9--4m9xw-eth0 coredns-7c65d6cfc9- kube-system ec5d7db1-2706-41c7-b992-bd43c3dcfac0 1012 0 2025-09-09 00:32:42 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7c65d6cfc9 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-7c65d6cfc9-4m9xw eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calid2ae74cd4a7 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="b4f37f569cc27ad0217f48a9e04cc951dd2046c27766b45d3331379418424d33" Namespace="kube-system" Pod="coredns-7c65d6cfc9-4m9xw" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--4m9xw-" Sep 9 00:33:18.905731 containerd[1592]: 2025-09-09 00:33:18.737 [INFO][4678] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="b4f37f569cc27ad0217f48a9e04cc951dd2046c27766b45d3331379418424d33" Namespace="kube-system" Pod="coredns-7c65d6cfc9-4m9xw" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--4m9xw-eth0" Sep 9 00:33:18.905731 containerd[1592]: 2025-09-09 00:33:18.808 [INFO][4716] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="b4f37f569cc27ad0217f48a9e04cc951dd2046c27766b45d3331379418424d33" HandleID="k8s-pod-network.b4f37f569cc27ad0217f48a9e04cc951dd2046c27766b45d3331379418424d33" Workload="localhost-k8s-coredns--7c65d6cfc9--4m9xw-eth0" Sep 9 00:33:18.905731 containerd[1592]: 2025-09-09 00:33:18.808 [INFO][4716] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="b4f37f569cc27ad0217f48a9e04cc951dd2046c27766b45d3331379418424d33" HandleID="k8s-pod-network.b4f37f569cc27ad0217f48a9e04cc951dd2046c27766b45d3331379418424d33" Workload="localhost-k8s-coredns--7c65d6cfc9--4m9xw-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000139720), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-7c65d6cfc9-4m9xw", "timestamp":"2025-09-09 00:33:18.808445089 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 9 00:33:18.905731 containerd[1592]: 2025-09-09 00:33:18.811 [INFO][4716] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 9 00:33:18.905731 containerd[1592]: 2025-09-09 00:33:18.814 [INFO][4716] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 9 00:33:18.905731 containerd[1592]: 2025-09-09 00:33:18.814 [INFO][4716] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 9 00:33:18.905731 containerd[1592]: 2025-09-09 00:33:18.835 [INFO][4716] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.b4f37f569cc27ad0217f48a9e04cc951dd2046c27766b45d3331379418424d33" host="localhost" Sep 9 00:33:18.905731 containerd[1592]: 2025-09-09 00:33:18.842 [INFO][4716] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 9 00:33:18.905731 containerd[1592]: 2025-09-09 00:33:18.856 [INFO][4716] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 9 00:33:18.905731 containerd[1592]: 2025-09-09 00:33:18.858 [INFO][4716] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 9 00:33:18.905731 containerd[1592]: 2025-09-09 00:33:18.860 [INFO][4716] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 9 00:33:18.905731 containerd[1592]: 2025-09-09 00:33:18.861 [INFO][4716] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.b4f37f569cc27ad0217f48a9e04cc951dd2046c27766b45d3331379418424d33" host="localhost" Sep 9 00:33:18.905731 containerd[1592]: 2025-09-09 00:33:18.863 [INFO][4716] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.b4f37f569cc27ad0217f48a9e04cc951dd2046c27766b45d3331379418424d33 Sep 9 00:33:18.905731 containerd[1592]: 2025-09-09 00:33:18.867 [INFO][4716] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.b4f37f569cc27ad0217f48a9e04cc951dd2046c27766b45d3331379418424d33" host="localhost" Sep 9 00:33:18.905731 containerd[1592]: 2025-09-09 00:33:18.874 [INFO][4716] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.133/26] block=192.168.88.128/26 handle="k8s-pod-network.b4f37f569cc27ad0217f48a9e04cc951dd2046c27766b45d3331379418424d33" host="localhost" Sep 9 00:33:18.905731 containerd[1592]: 2025-09-09 00:33:18.874 [INFO][4716] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.133/26] handle="k8s-pod-network.b4f37f569cc27ad0217f48a9e04cc951dd2046c27766b45d3331379418424d33" host="localhost" Sep 9 00:33:18.905731 containerd[1592]: 2025-09-09 00:33:18.874 [INFO][4716] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Sep 9 00:33:18.905731 containerd[1592]: 2025-09-09 00:33:18.874 [INFO][4716] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.133/26] IPv6=[] ContainerID="b4f37f569cc27ad0217f48a9e04cc951dd2046c27766b45d3331379418424d33" HandleID="k8s-pod-network.b4f37f569cc27ad0217f48a9e04cc951dd2046c27766b45d3331379418424d33" Workload="localhost-k8s-coredns--7c65d6cfc9--4m9xw-eth0" Sep 9 00:33:18.906281 containerd[1592]: 2025-09-09 00:33:18.881 [INFO][4678] cni-plugin/k8s.go 418: Populated endpoint ContainerID="b4f37f569cc27ad0217f48a9e04cc951dd2046c27766b45d3331379418424d33" Namespace="kube-system" Pod="coredns-7c65d6cfc9-4m9xw" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--4m9xw-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7c65d6cfc9--4m9xw-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"ec5d7db1-2706-41c7-b992-bd43c3dcfac0", ResourceVersion:"1012", Generation:0, CreationTimestamp:time.Date(2025, time.September, 9, 0, 32, 42, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-7c65d6cfc9-4m9xw", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calid2ae74cd4a7", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 9 00:33:18.906281 containerd[1592]: 2025-09-09 00:33:18.882 [INFO][4678] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.133/32] ContainerID="b4f37f569cc27ad0217f48a9e04cc951dd2046c27766b45d3331379418424d33" Namespace="kube-system" Pod="coredns-7c65d6cfc9-4m9xw" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--4m9xw-eth0" Sep 9 00:33:18.906281 containerd[1592]: 2025-09-09 00:33:18.882 [INFO][4678] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calid2ae74cd4a7 ContainerID="b4f37f569cc27ad0217f48a9e04cc951dd2046c27766b45d3331379418424d33" Namespace="kube-system" Pod="coredns-7c65d6cfc9-4m9xw" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--4m9xw-eth0" Sep 9 00:33:18.906281 containerd[1592]: 2025-09-09 00:33:18.889 [INFO][4678] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="b4f37f569cc27ad0217f48a9e04cc951dd2046c27766b45d3331379418424d33" Namespace="kube-system" Pod="coredns-7c65d6cfc9-4m9xw" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--4m9xw-eth0" Sep 9 00:33:18.906281 
containerd[1592]: 2025-09-09 00:33:18.889 [INFO][4678] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="b4f37f569cc27ad0217f48a9e04cc951dd2046c27766b45d3331379418424d33" Namespace="kube-system" Pod="coredns-7c65d6cfc9-4m9xw" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--4m9xw-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7c65d6cfc9--4m9xw-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"ec5d7db1-2706-41c7-b992-bd43c3dcfac0", ResourceVersion:"1012", Generation:0, CreationTimestamp:time.Date(2025, time.September, 9, 0, 32, 42, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"b4f37f569cc27ad0217f48a9e04cc951dd2046c27766b45d3331379418424d33", Pod:"coredns-7c65d6cfc9-4m9xw", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calid2ae74cd4a7", MAC:"3a:63:61:50:bd:1f", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 9 00:33:18.906281 containerd[1592]: 2025-09-09 00:33:18.902 [INFO][4678] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="b4f37f569cc27ad0217f48a9e04cc951dd2046c27766b45d3331379418424d33" Namespace="kube-system" Pod="coredns-7c65d6cfc9-4m9xw" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--4m9xw-eth0" Sep 9 00:33:18.918915 systemd-resolved[1467]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 9 00:33:18.930138 containerd[1592]: time="2025-09-09T00:33:18.930012199Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 9 00:33:18.930138 containerd[1592]: time="2025-09-09T00:33:18.930103506Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 9 00:33:18.932713 containerd[1592]: time="2025-09-09T00:33:18.930135325Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 9 00:33:18.932842 containerd[1592]: time="2025-09-09T00:33:18.932802687Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 9 00:33:18.966793 systemd-resolved[1467]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 9 00:33:18.968610 containerd[1592]: time="2025-09-09T00:33:18.967946411Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-55bfc5d889-9l6s4,Uid:25010456-202e-41ea-aa9f-fe497ae64e66,Namespace:calico-system,Attempt:1,} returns sandbox id \"72616e094ada893052a5655b686c16af5c95a45ce366e628808970b2352d5f80\"" Sep 9 00:33:18.978862 systemd-networkd[1247]: cali80856d16d3e: Link UP Sep 9 00:33:18.979302 systemd-networkd[1247]: cali80856d16d3e: Gained carrier Sep 9 00:33:18.996694 containerd[1592]: 2025-09-09 00:33:18.788 [INFO][4689] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--6544c75f8--n8q9k-eth0 calico-apiserver-6544c75f8- calico-apiserver cba14676-c2d2-4393-a6fd-b4ef0dc67fba 1009 0 2025-09-09 00:32:52 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:6544c75f8 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-6544c75f8-n8q9k eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali80856d16d3e [] [] }} ContainerID="7a8a6c8d815ec0d35ff080da67491b7d80f4932c9d7c2135d61a96b494deadaa" Namespace="calico-apiserver" Pod="calico-apiserver-6544c75f8-n8q9k" WorkloadEndpoint="localhost-k8s-calico--apiserver--6544c75f8--n8q9k-" Sep 9 00:33:18.996694 containerd[1592]: 2025-09-09 00:33:18.789 [INFO][4689] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="7a8a6c8d815ec0d35ff080da67491b7d80f4932c9d7c2135d61a96b494deadaa" Namespace="calico-apiserver" Pod="calico-apiserver-6544c75f8-n8q9k" WorkloadEndpoint="localhost-k8s-calico--apiserver--6544c75f8--n8q9k-eth0" Sep 9 00:33:18.996694 containerd[1592]: 2025-09-09 00:33:18.859 [INFO][4730] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="7a8a6c8d815ec0d35ff080da67491b7d80f4932c9d7c2135d61a96b494deadaa" HandleID="k8s-pod-network.7a8a6c8d815ec0d35ff080da67491b7d80f4932c9d7c2135d61a96b494deadaa" Workload="localhost-k8s-calico--apiserver--6544c75f8--n8q9k-eth0" Sep 9 00:33:18.996694 containerd[1592]: 2025-09-09 00:33:18.860 [INFO][4730] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="7a8a6c8d815ec0d35ff080da67491b7d80f4932c9d7c2135d61a96b494deadaa" HandleID="k8s-pod-network.7a8a6c8d815ec0d35ff080da67491b7d80f4932c9d7c2135d61a96b494deadaa" Workload="localhost-k8s-calico--apiserver--6544c75f8--n8q9k-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002df5f0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-6544c75f8-n8q9k", "timestamp":"2025-09-09 00:33:18.859846444 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 9 00:33:18.996694 containerd[1592]: 2025-09-09 00:33:18.860 [INFO][4730] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 9 00:33:18.996694 containerd[1592]: 2025-09-09 00:33:18.874 [INFO][4730] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 9 00:33:18.996694 containerd[1592]: 2025-09-09 00:33:18.874 [INFO][4730] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 9 00:33:18.996694 containerd[1592]: 2025-09-09 00:33:18.936 [INFO][4730] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.7a8a6c8d815ec0d35ff080da67491b7d80f4932c9d7c2135d61a96b494deadaa" host="localhost" Sep 9 00:33:18.996694 containerd[1592]: 2025-09-09 00:33:18.942 [INFO][4730] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 9 00:33:18.996694 containerd[1592]: 2025-09-09 00:33:18.948 [INFO][4730] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 9 00:33:18.996694 containerd[1592]: 2025-09-09 00:33:18.950 [INFO][4730] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 9 00:33:18.996694 containerd[1592]: 2025-09-09 00:33:18.953 [INFO][4730] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 9 00:33:18.996694 containerd[1592]: 2025-09-09 00:33:18.953 [INFO][4730] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.7a8a6c8d815ec0d35ff080da67491b7d80f4932c9d7c2135d61a96b494deadaa" host="localhost" Sep 9 00:33:18.996694 containerd[1592]: 2025-09-09 00:33:18.954 [INFO][4730] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.7a8a6c8d815ec0d35ff080da67491b7d80f4932c9d7c2135d61a96b494deadaa Sep 9 00:33:18.996694 containerd[1592]: 2025-09-09 00:33:18.960 [INFO][4730] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.7a8a6c8d815ec0d35ff080da67491b7d80f4932c9d7c2135d61a96b494deadaa" host="localhost" Sep 9 00:33:18.996694 containerd[1592]: 2025-09-09 00:33:18.967 [INFO][4730] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.134/26] block=192.168.88.128/26 handle="k8s-pod-network.7a8a6c8d815ec0d35ff080da67491b7d80f4932c9d7c2135d61a96b494deadaa" host="localhost" Sep 9 00:33:18.996694 containerd[1592]: 2025-09-09 00:33:18.967 [INFO][4730] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.134/26] handle="k8s-pod-network.7a8a6c8d815ec0d35ff080da67491b7d80f4932c9d7c2135d61a96b494deadaa" host="localhost" Sep 9 00:33:18.996694 containerd[1592]: 2025-09-09 00:33:18.967 [INFO][4730] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Sep 9 00:33:18.996694 containerd[1592]: 2025-09-09 00:33:18.967 [INFO][4730] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.134/26] IPv6=[] ContainerID="7a8a6c8d815ec0d35ff080da67491b7d80f4932c9d7c2135d61a96b494deadaa" HandleID="k8s-pod-network.7a8a6c8d815ec0d35ff080da67491b7d80f4932c9d7c2135d61a96b494deadaa" Workload="localhost-k8s-calico--apiserver--6544c75f8--n8q9k-eth0" Sep 9 00:33:18.997992 containerd[1592]: 2025-09-09 00:33:18.972 [INFO][4689] cni-plugin/k8s.go 418: Populated endpoint ContainerID="7a8a6c8d815ec0d35ff080da67491b7d80f4932c9d7c2135d61a96b494deadaa" Namespace="calico-apiserver" Pod="calico-apiserver-6544c75f8-n8q9k" WorkloadEndpoint="localhost-k8s-calico--apiserver--6544c75f8--n8q9k-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--6544c75f8--n8q9k-eth0", GenerateName:"calico-apiserver-6544c75f8-", Namespace:"calico-apiserver", SelfLink:"", UID:"cba14676-c2d2-4393-a6fd-b4ef0dc67fba", ResourceVersion:"1009", Generation:0, CreationTimestamp:time.Date(2025, time.September, 9, 0, 32, 52, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6544c75f8", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-6544c75f8-n8q9k", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali80856d16d3e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 9 00:33:18.997992 containerd[1592]: 2025-09-09 00:33:18.973 [INFO][4689] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.134/32] ContainerID="7a8a6c8d815ec0d35ff080da67491b7d80f4932c9d7c2135d61a96b494deadaa" Namespace="calico-apiserver" Pod="calico-apiserver-6544c75f8-n8q9k" WorkloadEndpoint="localhost-k8s-calico--apiserver--6544c75f8--n8q9k-eth0" Sep 9 00:33:18.997992 containerd[1592]: 2025-09-09 00:33:18.973 [INFO][4689] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali80856d16d3e ContainerID="7a8a6c8d815ec0d35ff080da67491b7d80f4932c9d7c2135d61a96b494deadaa" Namespace="calico-apiserver" Pod="calico-apiserver-6544c75f8-n8q9k" WorkloadEndpoint="localhost-k8s-calico--apiserver--6544c75f8--n8q9k-eth0" Sep 9 00:33:18.997992 containerd[1592]: 2025-09-09 00:33:18.979 [INFO][4689] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="7a8a6c8d815ec0d35ff080da67491b7d80f4932c9d7c2135d61a96b494deadaa" Namespace="calico-apiserver" Pod="calico-apiserver-6544c75f8-n8q9k" WorkloadEndpoint="localhost-k8s-calico--apiserver--6544c75f8--n8q9k-eth0" Sep 9 00:33:18.997992 containerd[1592]: 2025-09-09 00:33:18.980 [INFO][4689] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="7a8a6c8d815ec0d35ff080da67491b7d80f4932c9d7c2135d61a96b494deadaa" Namespace="calico-apiserver" Pod="calico-apiserver-6544c75f8-n8q9k" WorkloadEndpoint="localhost-k8s-calico--apiserver--6544c75f8--n8q9k-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--6544c75f8--n8q9k-eth0", GenerateName:"calico-apiserver-6544c75f8-", Namespace:"calico-apiserver", SelfLink:"", UID:"cba14676-c2d2-4393-a6fd-b4ef0dc67fba", ResourceVersion:"1009", Generation:0, CreationTimestamp:time.Date(2025, time.September, 9, 0, 32, 52, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6544c75f8", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"7a8a6c8d815ec0d35ff080da67491b7d80f4932c9d7c2135d61a96b494deadaa", Pod:"calico-apiserver-6544c75f8-n8q9k", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali80856d16d3e", MAC:"56:ae:c2:18:cf:61", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 9 00:33:18.997992 containerd[1592]: 2025-09-09 00:33:18.990 [INFO][4689] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="7a8a6c8d815ec0d35ff080da67491b7d80f4932c9d7c2135d61a96b494deadaa" Namespace="calico-apiserver" Pod="calico-apiserver-6544c75f8-n8q9k" WorkloadEndpoint="localhost-k8s-calico--apiserver--6544c75f8--n8q9k-eth0" Sep 9 00:33:19.001567 containerd[1592]: time="2025-09-09T00:33:19.001539635Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-4m9xw,Uid:ec5d7db1-2706-41c7-b992-bd43c3dcfac0,Namespace:kube-system,Attempt:1,} returns sandbox id \"b4f37f569cc27ad0217f48a9e04cc951dd2046c27766b45d3331379418424d33\"" Sep 9 00:33:19.002481 kubelet[2703]: E0909 00:33:19.002451 2703 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:33:19.004945 containerd[1592]: time="2025-09-09T00:33:19.004890364Z" level=info msg="CreateContainer within sandbox \"b4f37f569cc27ad0217f48a9e04cc951dd2046c27766b45d3331379418424d33\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Sep 9 00:33:19.020182 containerd[1592]: time="2025-09-09T00:33:19.020064849Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 9 00:33:19.020182 containerd[1592]: time="2025-09-09T00:33:19.020138625Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 9 00:33:19.020182 containerd[1592]: time="2025-09-09T00:33:19.020150667Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 9 00:33:19.020367 containerd[1592]: time="2025-09-09T00:33:19.020294531Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 9 00:33:19.021571 containerd[1592]: time="2025-09-09T00:33:19.021248134Z" level=info msg="CreateContainer within sandbox \"b4f37f569cc27ad0217f48a9e04cc951dd2046c27766b45d3331379418424d33\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"c6d32a92b5eefd9abe99ef457779d7523de9a19446e7a0982292ce40e9d0caa8\"" Sep 9 00:33:19.023533 containerd[1592]: time="2025-09-09T00:33:19.022164559Z" level=info msg="StartContainer for \"c6d32a92b5eefd9abe99ef457779d7523de9a19446e7a0982292ce40e9d0caa8\"" Sep 9 00:33:19.048226 systemd-resolved[1467]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 9 00:33:19.080537 containerd[1592]: time="2025-09-09T00:33:19.080280792Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6544c75f8-n8q9k,Uid:cba14676-c2d2-4393-a6fd-b4ef0dc67fba,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"7a8a6c8d815ec0d35ff080da67491b7d80f4932c9d7c2135d61a96b494deadaa\"" Sep 9 00:33:19.103734 containerd[1592]: time="2025-09-09T00:33:19.103680525Z" level=info msg="StartContainer for \"c6d32a92b5eefd9abe99ef457779d7523de9a19446e7a0982292ce40e9d0caa8\" returns successfully" Sep 9 00:33:19.286543 kubelet[2703]: I0909 00:33:19.285945 2703 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 9 00:33:19.408230 systemd-networkd[1247]: calibbbcfddec73: Gained IPv6LL Sep 9 00:33:19.524601 containerd[1592]: time="2025-09-09T00:33:19.523661296Z" level=info msg="StopPodSandbox for \"840ec3158e679d69dff21a50896b36dbde90719eff79e14b22894ff7dc8f2486\"" Sep 9 00:33:19.753323 kubelet[2703]: E0909 00:33:19.753283 2703 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:33:19.800803 containerd[1592]: 2025-09-09 00:33:19.667 [INFO][4965] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="840ec3158e679d69dff21a50896b36dbde90719eff79e14b22894ff7dc8f2486" Sep 9 00:33:19.800803 containerd[1592]: 2025-09-09 00:33:19.668 [INFO][4965] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="840ec3158e679d69dff21a50896b36dbde90719eff79e14b22894ff7dc8f2486" iface="eth0" netns="/var/run/netns/cni-0ec97db2-8167-3715-868c-2acbdd293045" Sep 9 00:33:19.800803 containerd[1592]: 2025-09-09 00:33:19.671 [INFO][4965] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="840ec3158e679d69dff21a50896b36dbde90719eff79e14b22894ff7dc8f2486" iface="eth0" netns="/var/run/netns/cni-0ec97db2-8167-3715-868c-2acbdd293045" Sep 9 00:33:19.800803 containerd[1592]: 2025-09-09 00:33:19.672 [INFO][4965] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="840ec3158e679d69dff21a50896b36dbde90719eff79e14b22894ff7dc8f2486" iface="eth0" netns="/var/run/netns/cni-0ec97db2-8167-3715-868c-2acbdd293045" Sep 9 00:33:19.800803 containerd[1592]: 2025-09-09 00:33:19.673 [INFO][4965] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="840ec3158e679d69dff21a50896b36dbde90719eff79e14b22894ff7dc8f2486" Sep 9 00:33:19.800803 containerd[1592]: 2025-09-09 00:33:19.673 [INFO][4965] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="840ec3158e679d69dff21a50896b36dbde90719eff79e14b22894ff7dc8f2486" Sep 9 00:33:19.800803 containerd[1592]: 2025-09-09 00:33:19.734 [INFO][4976] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="840ec3158e679d69dff21a50896b36dbde90719eff79e14b22894ff7dc8f2486" HandleID="k8s-pod-network.840ec3158e679d69dff21a50896b36dbde90719eff79e14b22894ff7dc8f2486" Workload="localhost-k8s-calico--apiserver--6544c75f8--qw6bt-eth0" Sep 9 00:33:19.800803 containerd[1592]: 2025-09-09 00:33:19.734 [INFO][4976] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 9 00:33:19.800803 containerd[1592]: 2025-09-09 00:33:19.735 [INFO][4976] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 9 00:33:19.800803 containerd[1592]: 2025-09-09 00:33:19.765 [WARNING][4976] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="840ec3158e679d69dff21a50896b36dbde90719eff79e14b22894ff7dc8f2486" HandleID="k8s-pod-network.840ec3158e679d69dff21a50896b36dbde90719eff79e14b22894ff7dc8f2486" Workload="localhost-k8s-calico--apiserver--6544c75f8--qw6bt-eth0" Sep 9 00:33:19.800803 containerd[1592]: 2025-09-09 00:33:19.765 [INFO][4976] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="840ec3158e679d69dff21a50896b36dbde90719eff79e14b22894ff7dc8f2486" HandleID="k8s-pod-network.840ec3158e679d69dff21a50896b36dbde90719eff79e14b22894ff7dc8f2486" Workload="localhost-k8s-calico--apiserver--6544c75f8--qw6bt-eth0" Sep 9 00:33:19.800803 containerd[1592]: 2025-09-09 00:33:19.774 [INFO][4976] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 9 00:33:19.800803 containerd[1592]: 2025-09-09 00:33:19.795 [INFO][4965] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="840ec3158e679d69dff21a50896b36dbde90719eff79e14b22894ff7dc8f2486" Sep 9 00:33:19.802080 containerd[1592]: time="2025-09-09T00:33:19.801542010Z" level=info msg="TearDown network for sandbox \"840ec3158e679d69dff21a50896b36dbde90719eff79e14b22894ff7dc8f2486\" successfully" Sep 9 00:33:19.802080 containerd[1592]: time="2025-09-09T00:33:19.801580050Z" level=info msg="StopPodSandbox for \"840ec3158e679d69dff21a50896b36dbde90719eff79e14b22894ff7dc8f2486\" returns successfully" Sep 9 00:33:19.806330 systemd[1]: run-netns-cni\x2d0ec97db2\x2d8167\x2d3715\x2d868c\x2d2acbdd293045.mount: Deactivated successfully. 
Sep 9 00:33:19.811390 containerd[1592]: time="2025-09-09T00:33:19.811329181Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6544c75f8-qw6bt,Uid:e073184d-6f60-4919-94e6-d04e6ac8bc91,Namespace:calico-apiserver,Attempt:1,}" Sep 9 00:33:19.826050 kubelet[2703]: I0909 00:33:19.825312 2703 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-4m9xw" podStartSLOduration=37.825286469 podStartE2EDuration="37.825286469s" podCreationTimestamp="2025-09-09 00:32:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-09 00:33:19.825240403 +0000 UTC m=+41.410624106" watchObservedRunningTime="2025-09-09 00:33:19.825286469 +0000 UTC m=+41.410670192" Sep 9 00:33:20.313057 systemd-networkd[1247]: cali2274b1979bf: Link UP Sep 9 00:33:20.320901 systemd-networkd[1247]: cali2274b1979bf: Gained carrier Sep 9 00:33:20.373037 containerd[1592]: 2025-09-09 00:33:19.974 [INFO][5012] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--6544c75f8--qw6bt-eth0 calico-apiserver-6544c75f8- calico-apiserver e073184d-6f60-4919-94e6-d04e6ac8bc91 1040 0 2025-09-09 00:32:52 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:6544c75f8 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-6544c75f8-qw6bt eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali2274b1979bf [] [] }} ContainerID="e9ab2a1225323c0a3c4d5bebe9ba3e19ed28b4ccaf3e4ef6c4fd3bb898423c50" Namespace="calico-apiserver" Pod="calico-apiserver-6544c75f8-qw6bt" WorkloadEndpoint="localhost-k8s-calico--apiserver--6544c75f8--qw6bt-" Sep 9 00:33:20.373037 containerd[1592]: 2025-09-09 00:33:19.974 [INFO][5012] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="e9ab2a1225323c0a3c4d5bebe9ba3e19ed28b4ccaf3e4ef6c4fd3bb898423c50" Namespace="calico-apiserver" Pod="calico-apiserver-6544c75f8-qw6bt" WorkloadEndpoint="localhost-k8s-calico--apiserver--6544c75f8--qw6bt-eth0" Sep 9 00:33:20.373037 containerd[1592]: 2025-09-09 00:33:20.045 [INFO][5026] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="e9ab2a1225323c0a3c4d5bebe9ba3e19ed28b4ccaf3e4ef6c4fd3bb898423c50" HandleID="k8s-pod-network.e9ab2a1225323c0a3c4d5bebe9ba3e19ed28b4ccaf3e4ef6c4fd3bb898423c50" Workload="localhost-k8s-calico--apiserver--6544c75f8--qw6bt-eth0" Sep 9 00:33:20.373037 containerd[1592]: 2025-09-09 00:33:20.046 [INFO][5026] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="e9ab2a1225323c0a3c4d5bebe9ba3e19ed28b4ccaf3e4ef6c4fd3bb898423c50" HandleID="k8s-pod-network.e9ab2a1225323c0a3c4d5bebe9ba3e19ed28b4ccaf3e4ef6c4fd3bb898423c50" Workload="localhost-k8s-calico--apiserver--6544c75f8--qw6bt-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00035f630), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-6544c75f8-qw6bt", "timestamp":"2025-09-09 00:33:20.04543722 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 9 00:33:20.373037 
containerd[1592]: 2025-09-09 00:33:20.046 [INFO][5026] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 9 00:33:20.373037 containerd[1592]: 2025-09-09 00:33:20.046 [INFO][5026] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 9 00:33:20.373037 containerd[1592]: 2025-09-09 00:33:20.046 [INFO][5026] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 9 00:33:20.373037 containerd[1592]: 2025-09-09 00:33:20.062 [INFO][5026] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.e9ab2a1225323c0a3c4d5bebe9ba3e19ed28b4ccaf3e4ef6c4fd3bb898423c50" host="localhost" Sep 9 00:33:20.373037 containerd[1592]: 2025-09-09 00:33:20.074 [INFO][5026] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 9 00:33:20.373037 containerd[1592]: 2025-09-09 00:33:20.165 [INFO][5026] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 9 00:33:20.373037 containerd[1592]: 2025-09-09 00:33:20.168 [INFO][5026] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 9 00:33:20.373037 containerd[1592]: 2025-09-09 00:33:20.173 [INFO][5026] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 9 00:33:20.373037 containerd[1592]: 2025-09-09 00:33:20.174 [INFO][5026] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.e9ab2a1225323c0a3c4d5bebe9ba3e19ed28b4ccaf3e4ef6c4fd3bb898423c50" host="localhost" Sep 9 00:33:20.373037 containerd[1592]: 2025-09-09 00:33:20.177 [INFO][5026] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.e9ab2a1225323c0a3c4d5bebe9ba3e19ed28b4ccaf3e4ef6c4fd3bb898423c50 Sep 9 00:33:20.373037 containerd[1592]: 2025-09-09 00:33:20.209 [INFO][5026] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.e9ab2a1225323c0a3c4d5bebe9ba3e19ed28b4ccaf3e4ef6c4fd3bb898423c50" host="localhost" Sep 9 00:33:20.373037 containerd[1592]: 2025-09-09 00:33:20.278 [INFO][5026] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.135/26] block=192.168.88.128/26 handle="k8s-pod-network.e9ab2a1225323c0a3c4d5bebe9ba3e19ed28b4ccaf3e4ef6c4fd3bb898423c50" host="localhost" Sep 9 00:33:20.373037 containerd[1592]: 2025-09-09 00:33:20.278 [INFO][5026] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.135/26] handle="k8s-pod-network.e9ab2a1225323c0a3c4d5bebe9ba3e19ed28b4ccaf3e4ef6c4fd3bb898423c50" host="localhost" Sep 9 00:33:20.373037 containerd[1592]: 2025-09-09 00:33:20.278 [INFO][5026] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Sep 9 00:33:20.373037 containerd[1592]: 2025-09-09 00:33:20.278 [INFO][5026] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.135/26] IPv6=[] ContainerID="e9ab2a1225323c0a3c4d5bebe9ba3e19ed28b4ccaf3e4ef6c4fd3bb898423c50" HandleID="k8s-pod-network.e9ab2a1225323c0a3c4d5bebe9ba3e19ed28b4ccaf3e4ef6c4fd3bb898423c50" Workload="localhost-k8s-calico--apiserver--6544c75f8--qw6bt-eth0" Sep 9 00:33:20.374083 containerd[1592]: 2025-09-09 00:33:20.301 [INFO][5012] cni-plugin/k8s.go 418: Populated endpoint ContainerID="e9ab2a1225323c0a3c4d5bebe9ba3e19ed28b4ccaf3e4ef6c4fd3bb898423c50" Namespace="calico-apiserver" Pod="calico-apiserver-6544c75f8-qw6bt" WorkloadEndpoint="localhost-k8s-calico--apiserver--6544c75f8--qw6bt-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--6544c75f8--qw6bt-eth0", GenerateName:"calico-apiserver-6544c75f8-", Namespace:"calico-apiserver", SelfLink:"", UID:"e073184d-6f60-4919-94e6-d04e6ac8bc91", ResourceVersion:"1040", Generation:0, CreationTimestamp:time.Date(2025, time.September, 9, 0, 32, 52, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6544c75f8", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-6544c75f8-qw6bt", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali2274b1979bf", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 9 00:33:20.374083 containerd[1592]: 2025-09-09 00:33:20.302 [INFO][5012] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.135/32] ContainerID="e9ab2a1225323c0a3c4d5bebe9ba3e19ed28b4ccaf3e4ef6c4fd3bb898423c50" Namespace="calico-apiserver" Pod="calico-apiserver-6544c75f8-qw6bt" WorkloadEndpoint="localhost-k8s-calico--apiserver--6544c75f8--qw6bt-eth0" Sep 9 00:33:20.374083 containerd[1592]: 2025-09-09 00:33:20.302 [INFO][5012] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali2274b1979bf ContainerID="e9ab2a1225323c0a3c4d5bebe9ba3e19ed28b4ccaf3e4ef6c4fd3bb898423c50" Namespace="calico-apiserver" Pod="calico-apiserver-6544c75f8-qw6bt" WorkloadEndpoint="localhost-k8s-calico--apiserver--6544c75f8--qw6bt-eth0" Sep 9 00:33:20.374083 containerd[1592]: 2025-09-09 00:33:20.323 [INFO][5012] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="e9ab2a1225323c0a3c4d5bebe9ba3e19ed28b4ccaf3e4ef6c4fd3bb898423c50" Namespace="calico-apiserver" Pod="calico-apiserver-6544c75f8-qw6bt" WorkloadEndpoint="localhost-k8s-calico--apiserver--6544c75f8--qw6bt-eth0" Sep 9 00:33:20.374083 containerd[1592]: 2025-09-09 00:33:20.330 [INFO][5012] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="e9ab2a1225323c0a3c4d5bebe9ba3e19ed28b4ccaf3e4ef6c4fd3bb898423c50" Namespace="calico-apiserver" Pod="calico-apiserver-6544c75f8-qw6bt" WorkloadEndpoint="localhost-k8s-calico--apiserver--6544c75f8--qw6bt-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--6544c75f8--qw6bt-eth0", GenerateName:"calico-apiserver-6544c75f8-", Namespace:"calico-apiserver", SelfLink:"", UID:"e073184d-6f60-4919-94e6-d04e6ac8bc91", ResourceVersion:"1040", Generation:0, CreationTimestamp:time.Date(2025, time.September, 9, 0, 32, 52, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6544c75f8", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"e9ab2a1225323c0a3c4d5bebe9ba3e19ed28b4ccaf3e4ef6c4fd3bb898423c50", Pod:"calico-apiserver-6544c75f8-qw6bt", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali2274b1979bf", MAC:"2a:a5:5f:5c:73:fb", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 9 00:33:20.374083 containerd[1592]: 2025-09-09 00:33:20.361 [INFO][5012] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="e9ab2a1225323c0a3c4d5bebe9ba3e19ed28b4ccaf3e4ef6c4fd3bb898423c50" Namespace="calico-apiserver" Pod="calico-apiserver-6544c75f8-qw6bt" WorkloadEndpoint="localhost-k8s-calico--apiserver--6544c75f8--qw6bt-eth0" Sep 9 00:33:20.437467 containerd[1592]: time="2025-09-09T00:33:20.432960391Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 9 00:33:20.437467 containerd[1592]: time="2025-09-09T00:33:20.434896939Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 9 00:33:20.437467 containerd[1592]: time="2025-09-09T00:33:20.434925171Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 9 00:33:20.437467 containerd[1592]: time="2025-09-09T00:33:20.435155224Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 9 00:33:20.520820 systemd-resolved[1467]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 9 00:33:20.526336 containerd[1592]: time="2025-09-09T00:33:20.526262702Z" level=info msg="StopPodSandbox for \"3663276c0852bd66d1bd2c667fe721058a9725505d67f72a86fbeb01c8f4e4d7\"" Sep 9 00:33:20.606509 containerd[1592]: time="2025-09-09T00:33:20.604839017Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6544c75f8-qw6bt,Uid:e073184d-6f60-4919-94e6-d04e6ac8bc91,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"e9ab2a1225323c0a3c4d5bebe9ba3e19ed28b4ccaf3e4ef6c4fd3bb898423c50\"" Sep 9 00:33:20.629390 systemd-networkd[1247]: calie5ed1746639: Gained IPv6LL Sep 9 00:33:20.734188 containerd[1592]: 2025-09-09 00:33:20.659 [INFO][5090] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="3663276c0852bd66d1bd2c667fe721058a9725505d67f72a86fbeb01c8f4e4d7" Sep 9 00:33:20.734188 containerd[1592]: 2025-09-09 00:33:20.659 [INFO][5090] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="3663276c0852bd66d1bd2c667fe721058a9725505d67f72a86fbeb01c8f4e4d7" iface="eth0" netns="/var/run/netns/cni-fcaf664f-aa79-4de4-58b0-595f3986613b" Sep 9 00:33:20.734188 containerd[1592]: 2025-09-09 00:33:20.660 [INFO][5090] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="3663276c0852bd66d1bd2c667fe721058a9725505d67f72a86fbeb01c8f4e4d7" iface="eth0" netns="/var/run/netns/cni-fcaf664f-aa79-4de4-58b0-595f3986613b" Sep 9 00:33:20.734188 containerd[1592]: 2025-09-09 00:33:20.660 [INFO][5090] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="3663276c0852bd66d1bd2c667fe721058a9725505d67f72a86fbeb01c8f4e4d7" iface="eth0" netns="/var/run/netns/cni-fcaf664f-aa79-4de4-58b0-595f3986613b" Sep 9 00:33:20.734188 containerd[1592]: 2025-09-09 00:33:20.661 [INFO][5090] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="3663276c0852bd66d1bd2c667fe721058a9725505d67f72a86fbeb01c8f4e4d7" Sep 9 00:33:20.734188 containerd[1592]: 2025-09-09 00:33:20.661 [INFO][5090] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="3663276c0852bd66d1bd2c667fe721058a9725505d67f72a86fbeb01c8f4e4d7" Sep 9 00:33:20.734188 containerd[1592]: 2025-09-09 00:33:20.706 [INFO][5106] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="3663276c0852bd66d1bd2c667fe721058a9725505d67f72a86fbeb01c8f4e4d7" HandleID="k8s-pod-network.3663276c0852bd66d1bd2c667fe721058a9725505d67f72a86fbeb01c8f4e4d7" Workload="localhost-k8s-coredns--7c65d6cfc9--45wzl-eth0" Sep 9 00:33:20.734188 containerd[1592]: 2025-09-09 00:33:20.707 [INFO][5106] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 9 00:33:20.734188 containerd[1592]: 2025-09-09 00:33:20.707 [INFO][5106] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 9 00:33:20.734188 containerd[1592]: 2025-09-09 00:33:20.719 [WARNING][5106] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="3663276c0852bd66d1bd2c667fe721058a9725505d67f72a86fbeb01c8f4e4d7" HandleID="k8s-pod-network.3663276c0852bd66d1bd2c667fe721058a9725505d67f72a86fbeb01c8f4e4d7" Workload="localhost-k8s-coredns--7c65d6cfc9--45wzl-eth0" Sep 9 00:33:20.734188 containerd[1592]: 2025-09-09 00:33:20.719 [INFO][5106] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="3663276c0852bd66d1bd2c667fe721058a9725505d67f72a86fbeb01c8f4e4d7" HandleID="k8s-pod-network.3663276c0852bd66d1bd2c667fe721058a9725505d67f72a86fbeb01c8f4e4d7" Workload="localhost-k8s-coredns--7c65d6cfc9--45wzl-eth0" Sep 9 00:33:20.734188 containerd[1592]: 2025-09-09 00:33:20.722 [INFO][5106] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 9 00:33:20.734188 containerd[1592]: 2025-09-09 00:33:20.728 [INFO][5090] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="3663276c0852bd66d1bd2c667fe721058a9725505d67f72a86fbeb01c8f4e4d7" Sep 9 00:33:20.740131 systemd[1]: run-netns-cni\x2dfcaf664f\x2daa79\x2d4de4\x2d58b0\x2d595f3986613b.mount: Deactivated successfully. Sep 9 00:33:20.742096 containerd[1592]: time="2025-09-09T00:33:20.742049831Z" level=info msg="TearDown network for sandbox \"3663276c0852bd66d1bd2c667fe721058a9725505d67f72a86fbeb01c8f4e4d7\" successfully" Sep 9 00:33:20.742194 containerd[1592]: time="2025-09-09T00:33:20.742173940Z" level=info msg="StopPodSandbox for \"3663276c0852bd66d1bd2c667fe721058a9725505d67f72a86fbeb01c8f4e4d7\" returns successfully" Sep 9 00:33:20.742941 kubelet[2703]: E0909 00:33:20.742765 2703 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:33:20.746129 containerd[1592]: time="2025-09-09T00:33:20.745130457Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-45wzl,Uid:a104faab-ebd4-4510-b157-e4917f6c56e1,Namespace:kube-system,Attempt:1,}" Sep 9 00:33:20.752921 systemd-networkd[1247]: cali80856d16d3e: Gained IPv6LL Sep 9 00:33:20.767275 kubelet[2703]: E0909 00:33:20.766995 2703 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:33:20.945004 systemd-networkd[1247]: calid2ae74cd4a7: Gained IPv6LL Sep 9 00:33:21.095061 systemd-networkd[1247]: cali25a961f0a5c: Link UP Sep 9 00:33:21.097295 systemd-networkd[1247]: cali25a961f0a5c: Gained carrier Sep 9 00:33:21.133034 containerd[1592]: 2025-09-09 00:33:20.900 [INFO][5115] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--7c65d6cfc9--45wzl-eth0 coredns-7c65d6cfc9- kube-system a104faab-ebd4-4510-b157-e4917f6c56e1 1063 0 2025-09-09 00:32:42 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7c65d6cfc9 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-7c65d6cfc9-45wzl eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali25a961f0a5c [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="e559ee1110580af51952b91ae4e25ca807cc2601d1332bd4d42c3aafdeec4fbb" Namespace="kube-system" Pod="coredns-7c65d6cfc9-45wzl" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--45wzl-" Sep 9 00:33:21.133034 containerd[1592]: 2025-09-09 00:33:20.900 [INFO][5115] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s 
ContainerID="e559ee1110580af51952b91ae4e25ca807cc2601d1332bd4d42c3aafdeec4fbb" Namespace="kube-system" Pod="coredns-7c65d6cfc9-45wzl" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--45wzl-eth0" Sep 9 00:33:21.133034 containerd[1592]: 2025-09-09 00:33:20.995 [INFO][5129] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="e559ee1110580af51952b91ae4e25ca807cc2601d1332bd4d42c3aafdeec4fbb" HandleID="k8s-pod-network.e559ee1110580af51952b91ae4e25ca807cc2601d1332bd4d42c3aafdeec4fbb" Workload="localhost-k8s-coredns--7c65d6cfc9--45wzl-eth0" Sep 9 00:33:21.133034 containerd[1592]: 2025-09-09 00:33:20.995 [INFO][5129] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="e559ee1110580af51952b91ae4e25ca807cc2601d1332bd4d42c3aafdeec4fbb" HandleID="k8s-pod-network.e559ee1110580af51952b91ae4e25ca807cc2601d1332bd4d42c3aafdeec4fbb" Workload="localhost-k8s-coredns--7c65d6cfc9--45wzl-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000481540), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-7c65d6cfc9-45wzl", "timestamp":"2025-09-09 00:33:20.994999667 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 9 00:33:21.133034 containerd[1592]: 2025-09-09 00:33:20.995 [INFO][5129] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 9 00:33:21.133034 containerd[1592]: 2025-09-09 00:33:20.995 [INFO][5129] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 9 00:33:21.133034 containerd[1592]: 2025-09-09 00:33:20.995 [INFO][5129] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 9 00:33:21.133034 containerd[1592]: 2025-09-09 00:33:21.009 [INFO][5129] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.e559ee1110580af51952b91ae4e25ca807cc2601d1332bd4d42c3aafdeec4fbb" host="localhost" Sep 9 00:33:21.133034 containerd[1592]: 2025-09-09 00:33:21.025 [INFO][5129] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 9 00:33:21.133034 containerd[1592]: 2025-09-09 00:33:21.040 [INFO][5129] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 9 00:33:21.133034 containerd[1592]: 2025-09-09 00:33:21.046 [INFO][5129] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 9 00:33:21.133034 containerd[1592]: 2025-09-09 00:33:21.051 [INFO][5129] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 9 00:33:21.133034 containerd[1592]: 2025-09-09 00:33:21.051 [INFO][5129] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.e559ee1110580af51952b91ae4e25ca807cc2601d1332bd4d42c3aafdeec4fbb" host="localhost" Sep 9 00:33:21.133034 containerd[1592]: 2025-09-09 00:33:21.054 [INFO][5129] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.e559ee1110580af51952b91ae4e25ca807cc2601d1332bd4d42c3aafdeec4fbb Sep 9 00:33:21.133034 containerd[1592]: 2025-09-09 00:33:21.064 [INFO][5129] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.e559ee1110580af51952b91ae4e25ca807cc2601d1332bd4d42c3aafdeec4fbb" host="localhost" Sep 9 00:33:21.133034 containerd[1592]: 2025-09-09 00:33:21.080 [INFO][5129] ipam/ipam.go 1256: Successfully claimed IPs: 
[192.168.88.136/26] block=192.168.88.128/26 handle="k8s-pod-network.e559ee1110580af51952b91ae4e25ca807cc2601d1332bd4d42c3aafdeec4fbb" host="localhost" Sep 9 00:33:21.133034 containerd[1592]: 2025-09-09 00:33:21.080 [INFO][5129] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.136/26] handle="k8s-pod-network.e559ee1110580af51952b91ae4e25ca807cc2601d1332bd4d42c3aafdeec4fbb" host="localhost" Sep 9 00:33:21.133034 containerd[1592]: 2025-09-09 00:33:21.080 [INFO][5129] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 9 00:33:21.133034 containerd[1592]: 2025-09-09 00:33:21.080 [INFO][5129] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.136/26] IPv6=[] ContainerID="e559ee1110580af51952b91ae4e25ca807cc2601d1332bd4d42c3aafdeec4fbb" HandleID="k8s-pod-network.e559ee1110580af51952b91ae4e25ca807cc2601d1332bd4d42c3aafdeec4fbb" Workload="localhost-k8s-coredns--7c65d6cfc9--45wzl-eth0" Sep 9 00:33:21.136032 containerd[1592]: 2025-09-09 00:33:21.085 [INFO][5115] cni-plugin/k8s.go 418: Populated endpoint ContainerID="e559ee1110580af51952b91ae4e25ca807cc2601d1332bd4d42c3aafdeec4fbb" Namespace="kube-system" Pod="coredns-7c65d6cfc9-45wzl" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--45wzl-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7c65d6cfc9--45wzl-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"a104faab-ebd4-4510-b157-e4917f6c56e1", ResourceVersion:"1063", Generation:0, CreationTimestamp:time.Date(2025, time.September, 9, 0, 32, 42, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-7c65d6cfc9-45wzl", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali25a961f0a5c", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 9 00:33:21.136032 containerd[1592]: 2025-09-09 00:33:21.085 [INFO][5115] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.136/32] ContainerID="e559ee1110580af51952b91ae4e25ca807cc2601d1332bd4d42c3aafdeec4fbb" Namespace="kube-system" Pod="coredns-7c65d6cfc9-45wzl" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--45wzl-eth0" Sep 9 00:33:21.136032 containerd[1592]: 2025-09-09 00:33:21.085 [INFO][5115] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali25a961f0a5c 
ContainerID="e559ee1110580af51952b91ae4e25ca807cc2601d1332bd4d42c3aafdeec4fbb" Namespace="kube-system" Pod="coredns-7c65d6cfc9-45wzl" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--45wzl-eth0" Sep 9 00:33:21.136032 containerd[1592]: 2025-09-09 00:33:21.097 [INFO][5115] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="e559ee1110580af51952b91ae4e25ca807cc2601d1332bd4d42c3aafdeec4fbb" Namespace="kube-system" Pod="coredns-7c65d6cfc9-45wzl" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--45wzl-eth0" Sep 9 00:33:21.136032 containerd[1592]: 2025-09-09 00:33:21.101 [INFO][5115] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="e559ee1110580af51952b91ae4e25ca807cc2601d1332bd4d42c3aafdeec4fbb" Namespace="kube-system" Pod="coredns-7c65d6cfc9-45wzl" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--45wzl-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7c65d6cfc9--45wzl-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"a104faab-ebd4-4510-b157-e4917f6c56e1", ResourceVersion:"1063", Generation:0, CreationTimestamp:time.Date(2025, time.September, 9, 0, 32, 42, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"e559ee1110580af51952b91ae4e25ca807cc2601d1332bd4d42c3aafdeec4fbb", Pod:"coredns-7c65d6cfc9-45wzl", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali25a961f0a5c", MAC:"4a:04:94:75:cc:5a", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 9 00:33:21.136032 containerd[1592]: 2025-09-09 00:33:21.126 [INFO][5115] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="e559ee1110580af51952b91ae4e25ca807cc2601d1332bd4d42c3aafdeec4fbb" Namespace="kube-system" Pod="coredns-7c65d6cfc9-45wzl" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--45wzl-eth0" Sep 9 00:33:21.168205 containerd[1592]: time="2025-09-09T00:33:21.167530411Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 9 00:33:21.169155 containerd[1592]: time="2025-09-09T00:33:21.168181503Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 9 00:33:21.169155 containerd[1592]: time="2025-09-09T00:33:21.168203022Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 9 00:33:21.170838 containerd[1592]: time="2025-09-09T00:33:21.170740646Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 9 00:33:21.228398 systemd-resolved[1467]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 9 00:33:21.280338 containerd[1592]: time="2025-09-09T00:33:21.280183026Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-45wzl,Uid:a104faab-ebd4-4510-b157-e4917f6c56e1,Namespace:kube-system,Attempt:1,} returns sandbox id \"e559ee1110580af51952b91ae4e25ca807cc2601d1332bd4d42c3aafdeec4fbb\"" Sep 9 00:33:21.281486 kubelet[2703]: E0909 00:33:21.281437 2703 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:33:21.289648 containerd[1592]: time="2025-09-09T00:33:21.287038079Z" level=info msg="CreateContainer within sandbox \"e559ee1110580af51952b91ae4e25ca807cc2601d1332bd4d42c3aafdeec4fbb\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Sep 9 00:33:21.348098 containerd[1592]: time="2025-09-09T00:33:21.347862148Z" level=info msg="CreateContainer within sandbox \"e559ee1110580af51952b91ae4e25ca807cc2601d1332bd4d42c3aafdeec4fbb\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"550ddf79fe6b104a7b69227322980b245308e50ddc3549919609e1b7f094f8fd\"" Sep 9 00:33:21.355740 containerd[1592]: time="2025-09-09T00:33:21.354722170Z" level=info msg="StartContainer for \"550ddf79fe6b104a7b69227322980b245308e50ddc3549919609e1b7f094f8fd\"" Sep 9 00:33:21.513742 containerd[1592]: time="2025-09-09T00:33:21.511270829Z" level=info msg="StartContainer for \"550ddf79fe6b104a7b69227322980b245308e50ddc3549919609e1b7f094f8fd\" returns successfully" Sep 9 00:33:21.682375 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2375977532.mount: Deactivated successfully. Sep 9 00:33:21.771337 kubelet[2703]: E0909 00:33:21.771154 2703 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:33:21.771910 kubelet[2703]: E0909 00:33:21.771335 2703 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:33:21.912902 kubelet[2703]: I0909 00:33:21.912574 2703 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-45wzl" podStartSLOduration=39.91254405 podStartE2EDuration="39.91254405s" podCreationTimestamp="2025-09-09 00:32:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-09 00:33:21.912172765 +0000 UTC m=+43.497556468" watchObservedRunningTime="2025-09-09 00:33:21.91254405 +0000 UTC m=+43.497927753" Sep 9 00:33:21.967709 systemd-networkd[1247]: cali2274b1979bf: Gained IPv6LL Sep 9 00:33:22.506637 systemd[1]: Started sshd@8-10.0.0.108:22-10.0.0.1:52150.service - OpenSSH per-connection server daemon (10.0.0.1:52150). 
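The IPAM trace above (look up host affinities, load block 192.168.88.128/26, claim the next free address, write the block back) is compact enough to sketch. This is a minimal illustration under invented names (the `block` type, bitmap layout, and `claimNext` are assumptions for clarity), not Calico's actual implementation:

```go
package main

import (
	"fmt"
	"net"
	"sync"
)

// Hypothetical stand-in for a Calico IPAM allocation block: a /26 whose
// 64 addresses are tracked with a used-bitmap.
type block struct {
	cidr net.IPNet
	used [64]bool
}

var hostWideIPAMLock sync.Mutex // the "host-wide IPAM lock" from the trace

// claimNext mirrors the logged sequence: under the lock, find the lowest
// free address in the affine block and mark it used ("Writing block in
// order to claim IPs" would persist this change to the datastore).
func claimNext(b *block) (net.IP, error) {
	hostWideIPAMLock.Lock()
	defer hostWideIPAMLock.Unlock()
	base := b.cidr.IP.To4()
	for i := range b.used {
		if b.used[i] {
			continue
		}
		b.used[i] = true
		return net.IPv4(base[0], base[1], base[2], base[3]+byte(i)), nil
	}
	return nil, fmt.Errorf("block %s exhausted", b.cidr.String())
}

func main() {
	_, cidr, _ := net.ParseCIDR("192.168.88.128/26")
	b := &block{cidr: *cidr}
	for i := 0; i < 8; i++ {
		b.used[i] = true // .128-.135 already taken (the apiserver pod got .135)
	}
	ip, _ := claimNext(b)
	fmt.Println(ip) // 192.168.88.136, the address coredns-7c65d6cfc9-45wzl received
}
```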
Sep 9 00:33:22.618704 sshd[5230]: Accepted publickey for core from 10.0.0.1 port 52150 ssh2: RSA SHA256:KA9/SrLi0HJZnQ3nz9u7pIgg2ymhn74LV9fPMAvvX5M Sep 9 00:33:22.621609 sshd[5230]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 00:33:22.626799 systemd-logind[1548]: New session 9 of user core. Sep 9 00:33:22.635862 systemd[1]: Started session-9.scope - Session 9 of User core. Sep 9 00:33:22.773145 kubelet[2703]: E0909 00:33:22.773032 2703 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:33:22.786199 sshd[5230]: pam_unix(sshd:session): session closed for user core Sep 9 00:33:22.794095 systemd-logind[1548]: Session 9 logged out. Waiting for processes to exit. Sep 9 00:33:22.795528 systemd[1]: sshd@8-10.0.0.108:22-10.0.0.1:52150.service: Deactivated successfully. Sep 9 00:33:22.799727 systemd[1]: session-9.scope: Deactivated successfully. Sep 9 00:33:22.801953 systemd-networkd[1247]: cali25a961f0a5c: Gained IPv6LL Sep 9 00:33:22.803757 systemd-logind[1548]: Removed session 9. Sep 9 00:33:22.950432 containerd[1592]: time="2025-09-09T00:33:22.950371866Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 00:33:22.951106 containerd[1592]: time="2025-09-09T00:33:22.951058336Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.3: active requests=0, bytes read=66357526" Sep 9 00:33:22.952171 containerd[1592]: time="2025-09-09T00:33:22.952147181Z" level=info msg="ImageCreate event name:\"sha256:a7d029fd8f6be94c26af980675c1650818e1e6e19dbd2f8c13e6e61963f021e8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 00:33:22.956161 containerd[1592]: time="2025-09-09T00:33:22.956112704Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane@sha256:46297703ab3739331a00a58f0d6a5498c8d3b6523ad947eed68592ee0f3e79f0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 00:33:22.956931 containerd[1592]: time="2025-09-09T00:33:22.956902544Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/goldmane:v3.30.3\" with image id \"sha256:a7d029fd8f6be94c26af980675c1650818e1e6e19dbd2f8c13e6e61963f021e8\", repo tag \"ghcr.io/flatcar/calico/goldmane:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/goldmane@sha256:46297703ab3739331a00a58f0d6a5498c8d3b6523ad947eed68592ee0f3e79f0\", size \"66357372\" in 4.885172724s" Sep 9 00:33:22.956983 containerd[1592]: time="2025-09-09T00:33:22.956935986Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.3\" returns image reference \"sha256:a7d029fd8f6be94c26af980675c1650818e1e6e19dbd2f8c13e6e61963f021e8\"" Sep 9 00:33:22.957804 containerd[1592]: time="2025-09-09T00:33:22.957786590Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.3\"" Sep 9 00:33:22.958847 containerd[1592]: time="2025-09-09T00:33:22.958800495Z" level=info msg="CreateContainer within sandbox \"b7c1aa64bb7acb9829411e84de7fcc335853c9365924ac1d62fb591488d31d77\" for container &ContainerMetadata{Name:goldmane,Attempt:0,}" Sep 9 00:33:22.972886 containerd[1592]: time="2025-09-09T00:33:22.972845961Z" level=info msg="CreateContainer within sandbox \"b7c1aa64bb7acb9829411e84de7fcc335853c9365924ac1d62fb591488d31d77\" for &ContainerMetadata{Name:goldmane,Attempt:0,} returns container id \"f64323706c6864ad45c22740f6b602196a6b63a6e4f8352ec6869fbd0a6b68d2\"" Sep 9 
00:33:22.973434 containerd[1592]: time="2025-09-09T00:33:22.973295392Z" level=info msg="StartContainer for \"f64323706c6864ad45c22740f6b602196a6b63a6e4f8352ec6869fbd0a6b68d2\"" Sep 9 00:33:23.294309 containerd[1592]: time="2025-09-09T00:33:23.294252553Z" level=info msg="StartContainer for \"f64323706c6864ad45c22740f6b602196a6b63a6e4f8352ec6869fbd0a6b68d2\" returns successfully" Sep 9 00:33:23.779372 kubelet[2703]: E0909 00:33:23.779064 2703 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:33:23.790831 kubelet[2703]: I0909 00:33:23.790769 2703 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/goldmane-7988f88666-csr6g" podStartSLOduration=24.195222541 podStartE2EDuration="29.79074482s" podCreationTimestamp="2025-09-09 00:32:54 +0000 UTC" firstStartedPulling="2025-09-09 00:33:17.362152624 +0000 UTC m=+38.947536327" lastFinishedPulling="2025-09-09 00:33:22.957674903 +0000 UTC m=+44.543058606" observedRunningTime="2025-09-09 00:33:23.789206982 +0000 UTC m=+45.374590685" watchObservedRunningTime="2025-09-09 00:33:23.79074482 +0000 UTC m=+45.376128523" Sep 9 00:33:24.729712 containerd[1592]: time="2025-09-09T00:33:24.729655292Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 00:33:24.730465 containerd[1592]: time="2025-09-09T00:33:24.730398812Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.3: active requests=0, bytes read=8760527" Sep 9 00:33:24.732644 containerd[1592]: time="2025-09-09T00:33:24.732609807Z" level=info msg="ImageCreate event name:\"sha256:666f4e02e75c30547109a06ed75b415a990a970811173aa741379cfaac4d9dd7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 00:33:24.734770 containerd[1592]: time="2025-09-09T00:33:24.734733708Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:f22c88018d8b58c4ef0052f594b216a13bd6852166ac131a538c5ab2fba23bb2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 00:33:24.735340 containerd[1592]: time="2025-09-09T00:33:24.735289341Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.30.3\" with image id \"sha256:666f4e02e75c30547109a06ed75b415a990a970811173aa741379cfaac4d9dd7\", repo tag \"ghcr.io/flatcar/calico/csi:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:f22c88018d8b58c4ef0052f594b216a13bd6852166ac131a538c5ab2fba23bb2\", size \"10253230\" in 1.777478336s" Sep 9 00:33:24.735380 containerd[1592]: time="2025-09-09T00:33:24.735337781Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.3\" returns image reference \"sha256:666f4e02e75c30547109a06ed75b415a990a970811173aa741379cfaac4d9dd7\"" Sep 9 00:33:24.736521 containerd[1592]: time="2025-09-09T00:33:24.736333017Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\"" Sep 9 00:33:24.737681 containerd[1592]: time="2025-09-09T00:33:24.737639924Z" level=info msg="CreateContainer within sandbox \"3970af2ee6d7af935efa54112c32cb7fae6ee0f92b0f900a72d4cb914a71478d\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Sep 9 00:33:24.754189 containerd[1592]: time="2025-09-09T00:33:24.754148438Z" level=info msg="CreateContainer within sandbox \"3970af2ee6d7af935efa54112c32cb7fae6ee0f92b0f900a72d4cb914a71478d\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id 
\"d6ba0b8fa0022ae15882f216e9e4569b11a137832df04620808333dcf1efdb10\"" Sep 9 00:33:24.755330 containerd[1592]: time="2025-09-09T00:33:24.755207364Z" level=info msg="StartContainer for \"d6ba0b8fa0022ae15882f216e9e4569b11a137832df04620808333dcf1efdb10\"" Sep 9 00:33:24.834479 containerd[1592]: time="2025-09-09T00:33:24.834360103Z" level=info msg="StartContainer for \"d6ba0b8fa0022ae15882f216e9e4569b11a137832df04620808333dcf1efdb10\" returns successfully" Sep 9 00:33:27.417781 containerd[1592]: time="2025-09-09T00:33:27.417697947Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 00:33:27.418558 containerd[1592]: time="2025-09-09T00:33:27.418517075Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.3: active requests=0, bytes read=51277746" Sep 9 00:33:27.419963 containerd[1592]: time="2025-09-09T00:33:27.419912078Z" level=info msg="ImageCreate event name:\"sha256:df191a54fb79de3c693f8b1b864a1bd3bd14f63b3fff9d5fa4869c471ce3cd37\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 00:33:27.427036 containerd[1592]: time="2025-09-09T00:33:27.426996827Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:27c4187717f08f0a5727019d8beb7597665eb47e69eaa1d7d091a7e28913e577\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 00:33:27.427977 containerd[1592]: time="2025-09-09T00:33:27.427941509Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\" with image id \"sha256:df191a54fb79de3c693f8b1b864a1bd3bd14f63b3fff9d5fa4869c471ce3cd37\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:27c4187717f08f0a5727019d8beb7597665eb47e69eaa1d7d091a7e28913e577\", size \"52770417\" in 2.691568548s" Sep 9 00:33:27.428042 containerd[1592]: time="2025-09-09T00:33:27.427984710Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\" returns image reference \"sha256:df191a54fb79de3c693f8b1b864a1bd3bd14f63b3fff9d5fa4869c471ce3cd37\"" Sep 9 00:33:27.429486 containerd[1592]: time="2025-09-09T00:33:27.429460583Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\"" Sep 9 00:33:27.440753 containerd[1592]: time="2025-09-09T00:33:27.440709010Z" level=info msg="CreateContainer within sandbox \"72616e094ada893052a5655b686c16af5c95a45ce366e628808970b2352d5f80\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Sep 9 00:33:27.458478 containerd[1592]: time="2025-09-09T00:33:27.458435274Z" level=info msg="CreateContainer within sandbox \"72616e094ada893052a5655b686c16af5c95a45ce366e628808970b2352d5f80\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"0aa7f567065e614b5432ffd6165234b34df0e1519460d87d1e4d90a3f3874d9a\"" Sep 9 00:33:27.459267 containerd[1592]: time="2025-09-09T00:33:27.459242350Z" level=info msg="StartContainer for \"0aa7f567065e614b5432ffd6165234b34df0e1519460d87d1e4d90a3f3874d9a\"" Sep 9 00:33:27.540565 containerd[1592]: time="2025-09-09T00:33:27.540522602Z" level=info msg="StartContainer for \"0aa7f567065e614b5432ffd6165234b34df0e1519460d87d1e4d90a3f3874d9a\" returns successfully" Sep 9 00:33:27.795767 systemd[1]: Started sshd@9-10.0.0.108:22-10.0.0.1:52164.service - OpenSSH per-connection server daemon (10.0.0.1:52164). 
Sep 9 00:33:27.811081 kubelet[2703]: I0909 00:33:27.811002 2703 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-55bfc5d889-9l6s4" podStartSLOduration=24.351254352 podStartE2EDuration="32.810978005s" podCreationTimestamp="2025-09-09 00:32:55 +0000 UTC" firstStartedPulling="2025-09-09 00:33:18.969585428 +0000 UTC m=+40.554969131" lastFinishedPulling="2025-09-09 00:33:27.429309081 +0000 UTC m=+49.014692784" observedRunningTime="2025-09-09 00:33:27.810248634 +0000 UTC m=+49.395632347" watchObservedRunningTime="2025-09-09 00:33:27.810978005 +0000 UTC m=+49.396361708" Sep 9 00:33:27.853006 sshd[5475]: Accepted publickey for core from 10.0.0.1 port 52164 ssh2: RSA SHA256:KA9/SrLi0HJZnQ3nz9u7pIgg2ymhn74LV9fPMAvvX5M Sep 9 00:33:27.856188 sshd[5475]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 00:33:27.866988 systemd-logind[1548]: New session 10 of user core. Sep 9 00:33:27.875910 systemd[1]: Started session-10.scope - Session 10 of User core. Sep 9 00:33:28.204944 sshd[5475]: pam_unix(sshd:session): session closed for user core Sep 9 00:33:28.209083 systemd[1]: sshd@9-10.0.0.108:22-10.0.0.1:52164.service: Deactivated successfully. Sep 9 00:33:28.211728 systemd[1]: session-10.scope: Deactivated successfully. Sep 9 00:33:28.211749 systemd-logind[1548]: Session 10 logged out. Waiting for processes to exit. Sep 9 00:33:28.213073 systemd-logind[1548]: Removed session 10. Sep 9 00:33:31.184026 containerd[1592]: time="2025-09-09T00:33:31.183954957Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 00:33:31.184813 containerd[1592]: time="2025-09-09T00:33:31.184717919Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.3: active requests=0, bytes read=47333864" Sep 9 00:33:31.185941 containerd[1592]: time="2025-09-09T00:33:31.185902221Z" level=info msg="ImageCreate event name:\"sha256:879f2443aed0573271114108bfec35d3e76419f98282ef796c646d0986c5ba6a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 00:33:31.188283 containerd[1592]: time="2025-09-09T00:33:31.188246943Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:6a24147f11c1edce9d6ba79bdb0c2beadec53853fb43438a287291e67b41e51b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 00:33:31.188969 containerd[1592]: time="2025-09-09T00:33:31.188939203Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" with image id \"sha256:879f2443aed0573271114108bfec35d3e76419f98282ef796c646d0986c5ba6a\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:6a24147f11c1edce9d6ba79bdb0c2beadec53853fb43438a287291e67b41e51b\", size \"48826583\" in 3.759364206s" Sep 9 00:33:31.189032 containerd[1592]: time="2025-09-09T00:33:31.188975671Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" returns image reference \"sha256:879f2443aed0573271114108bfec35d3e76419f98282ef796c646d0986c5ba6a\"" Sep 9 00:33:31.190202 containerd[1592]: time="2025-09-09T00:33:31.189857967Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\"" Sep 9 00:33:31.190878 containerd[1592]: time="2025-09-09T00:33:31.190854487Z" level=info msg="CreateContainer within sandbox \"7a8a6c8d815ec0d35ff080da67491b7d80f4932c9d7c2135d61a96b494deadaa\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" 
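The pod_startup_latency_tracker entry above encodes a simple relation: podStartSLOduration is the end-to-end startup time minus the image-pull window. Plugging the calico-kube-controllers timestamps from that entry into this check reproduces both logged figures exactly:

```go
package main

import (
	"fmt"
	"time"
)

// ts parses one of the log's RFC3339 timestamps, panicking on bad input.
func ts(s string) time.Time {
	t, err := time.Parse(time.RFC3339Nano, s)
	if err != nil {
		panic(err)
	}
	return t
}

func main() {
	created := ts("2025-09-09T00:32:55Z")             // podCreationTimestamp
	running := ts("2025-09-09T00:33:27.810978005Z")   // watchObservedRunningTime
	pullStart := ts("2025-09-09T00:33:18.969585428Z") // firstStartedPulling
	pullEnd := ts("2025-09-09T00:33:27.429309081Z")   // lastFinishedPulling

	e2e := running.Sub(created)         // 32.810978005s = podStartE2EDuration
	slo := e2e - pullEnd.Sub(pullStart) // 24.351254352s = podStartSLOduration
	fmt.Println(e2e, slo)
}
```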
Sep 9 00:33:31.207716 containerd[1592]: time="2025-09-09T00:33:31.207661237Z" level=info msg="CreateContainer within sandbox \"7a8a6c8d815ec0d35ff080da67491b7d80f4932c9d7c2135d61a96b494deadaa\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"2f28cd8c89ef75ae39ed42dcc30de8990072df9b985058043b0ef3a3731cd607\"" Sep 9 00:33:31.208528 containerd[1592]: time="2025-09-09T00:33:31.208399904Z" level=info msg="StartContainer for \"2f28cd8c89ef75ae39ed42dcc30de8990072df9b985058043b0ef3a3731cd607\"" Sep 9 00:33:31.291884 containerd[1592]: time="2025-09-09T00:33:31.291816329Z" level=info msg="StartContainer for \"2f28cd8c89ef75ae39ed42dcc30de8990072df9b985058043b0ef3a3731cd607\" returns successfully" Sep 9 00:33:31.645191 containerd[1592]: time="2025-09-09T00:33:31.645101918Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 00:33:31.646048 containerd[1592]: time="2025-09-09T00:33:31.645980416Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.3: active requests=0, bytes read=77" Sep 9 00:33:31.647977 containerd[1592]: time="2025-09-09T00:33:31.647938813Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" with image id \"sha256:879f2443aed0573271114108bfec35d3e76419f98282ef796c646d0986c5ba6a\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:6a24147f11c1edce9d6ba79bdb0c2beadec53853fb43438a287291e67b41e51b\", size \"48826583\" in 458.05096ms" Sep 9 00:33:31.648039 containerd[1592]: time="2025-09-09T00:33:31.647978457Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" returns image reference \"sha256:879f2443aed0573271114108bfec35d3e76419f98282ef796c646d0986c5ba6a\"" Sep 9 00:33:31.649304 containerd[1592]: time="2025-09-09T00:33:31.649065127Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\"" Sep 9 00:33:31.650386 containerd[1592]: time="2025-09-09T00:33:31.650352755Z" level=info msg="CreateContainer within sandbox \"e9ab2a1225323c0a3c4d5bebe9ba3e19ed28b4ccaf3e4ef6c4fd3bb898423c50\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Sep 9 00:33:31.666536 containerd[1592]: time="2025-09-09T00:33:31.666487141Z" level=info msg="CreateContainer within sandbox \"e9ab2a1225323c0a3c4d5bebe9ba3e19ed28b4ccaf3e4ef6c4fd3bb898423c50\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"c8c51bd19882eeab4afe07d6601c88b2413a9bb428baf06deb2f3de8f16cc0dd\"" Sep 9 00:33:31.667362 containerd[1592]: time="2025-09-09T00:33:31.667112365Z" level=info msg="StartContainer for \"c8c51bd19882eeab4afe07d6601c88b2413a9bb428baf06deb2f3de8f16cc0dd\"" Sep 9 00:33:31.787707 containerd[1592]: time="2025-09-09T00:33:31.787558851Z" level=info msg="StartContainer for \"c8c51bd19882eeab4afe07d6601c88b2413a9bb428baf06deb2f3de8f16cc0dd\" returns successfully" Sep 9 00:33:31.848798 kubelet[2703]: I0909 00:33:31.848733 2703 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-6544c75f8-qw6bt" podStartSLOduration=28.809617047 podStartE2EDuration="39.848708087s" podCreationTimestamp="2025-09-09 00:32:52 +0000 UTC" firstStartedPulling="2025-09-09 00:33:20.609679033 +0000 UTC m=+42.195062736" lastFinishedPulling="2025-09-09 00:33:31.648770073 +0000 UTC m=+53.234153776" observedRunningTime="2025-09-09 00:33:31.847477186 +0000 UTC m=+53.432860889" 
watchObservedRunningTime="2025-09-09 00:33:31.848708087 +0000 UTC m=+53.434091800" Sep 9 00:33:31.849366 kubelet[2703]: I0909 00:33:31.848880 2703 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-6544c75f8-n8q9k" podStartSLOduration=27.741509815 podStartE2EDuration="39.84887471s" podCreationTimestamp="2025-09-09 00:32:52 +0000 UTC" firstStartedPulling="2025-09-09 00:33:19.082376093 +0000 UTC m=+40.667759796" lastFinishedPulling="2025-09-09 00:33:31.189740968 +0000 UTC m=+52.775124691" observedRunningTime="2025-09-09 00:33:31.833398028 +0000 UTC m=+53.418781731" watchObservedRunningTime="2025-09-09 00:33:31.84887471 +0000 UTC m=+53.434258413" Sep 9 00:33:32.807324 kubelet[2703]: I0909 00:33:32.807287 2703 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 9 00:33:33.227630 systemd[1]: Started sshd@10-10.0.0.108:22-10.0.0.1:46424.service - OpenSSH per-connection server daemon (10.0.0.1:46424). Sep 9 00:33:33.380559 sshd[5609]: Accepted publickey for core from 10.0.0.1 port 46424 ssh2: RSA SHA256:KA9/SrLi0HJZnQ3nz9u7pIgg2ymhn74LV9fPMAvvX5M Sep 9 00:33:33.390720 sshd[5609]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 00:33:33.401451 systemd-logind[1548]: New session 11 of user core. Sep 9 00:33:33.413492 systemd[1]: Started session-11.scope - Session 11 of User core. Sep 9 00:33:33.807638 sshd[5609]: pam_unix(sshd:session): session closed for user core Sep 9 00:33:33.823546 systemd[1]: Started sshd@11-10.0.0.108:22-10.0.0.1:46430.service - OpenSSH per-connection server daemon (10.0.0.1:46430). Sep 9 00:33:33.824814 systemd[1]: sshd@10-10.0.0.108:22-10.0.0.1:46424.service: Deactivated successfully. Sep 9 00:33:33.838050 systemd[1]: session-11.scope: Deactivated successfully. Sep 9 00:33:33.840553 systemd-logind[1548]: Session 11 logged out. Waiting for processes to exit. Sep 9 00:33:33.846779 systemd-logind[1548]: Removed session 11. Sep 9 00:33:34.097911 sshd[5632]: Accepted publickey for core from 10.0.0.1 port 46430 ssh2: RSA SHA256:KA9/SrLi0HJZnQ3nz9u7pIgg2ymhn74LV9fPMAvvX5M Sep 9 00:33:34.099808 sshd[5632]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 00:33:34.105090 systemd-logind[1548]: New session 12 of user core. Sep 9 00:33:34.120982 systemd[1]: Started session-12.scope - Session 12 of User core. 
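Worth noting in the pair of apiserver pulls above: the first took 3.759364206s, while the repeat completed in 458.05096ms with only 77 bytes read. That is consistent with the layers already sitting in containerd's content store, leaving only a manifest digest check to do. A toy sketch of that digest-gated pattern, with an invented `store` type rather than containerd's actual resolver API:

```go
package main

import "fmt"

type store map[string]bool // digest -> content already present locally

// pull skips the download when the remote digest is already in the store,
// which is why a repeat pull is a cheap manifest check rather than a fetch.
func pull(s store, tag, digest string) string {
	if s[digest] {
		return "up to date (manifest check only): " + tag
	}
	s[digest] = true // a real pull would fetch and unpack the layers here
	return "downloaded: " + tag
}

func main() {
	s := store{}
	d := "sha256:6a24147f11c1edce9d6ba79bdb0c2beadec53853fb43438a287291e67b41e51b"
	fmt.Println(pull(s, "calico/apiserver:v3.30.3", d)) // first pull: 3.759364206s
	fmt.Println(pull(s, "calico/apiserver:v3.30.3", d)) // repeat pull: 458.05096ms
}
```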
Sep 9 00:33:34.221438 containerd[1592]: time="2025-09-09T00:33:34.221318769Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 00:33:34.223460 containerd[1592]: time="2025-09-09T00:33:34.223371678Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3: active requests=0, bytes read=14698542" Sep 9 00:33:34.224713 containerd[1592]: time="2025-09-09T00:33:34.224643976Z" level=info msg="ImageCreate event name:\"sha256:b8f31c4fdaed3fa08af64de3d37d65a4c2ea0d9f6f522cb60d2e0cb424f8dd8a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 00:33:34.228286 containerd[1592]: time="2025-09-09T00:33:34.228122493Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:731ab232ca708102ab332340b1274d5cd656aa896ecc5368ee95850b811df86f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 00:33:34.228888 containerd[1592]: time="2025-09-09T00:33:34.228817322Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\" with image id \"sha256:b8f31c4fdaed3fa08af64de3d37d65a4c2ea0d9f6f522cb60d2e0cb424f8dd8a\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:731ab232ca708102ab332340b1274d5cd656aa896ecc5368ee95850b811df86f\", size \"16191197\" in 2.579705978s" Sep 9 00:33:34.228888 containerd[1592]: time="2025-09-09T00:33:34.228872666Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\" returns image reference \"sha256:b8f31c4fdaed3fa08af64de3d37d65a4c2ea0d9f6f522cb60d2e0cb424f8dd8a\"" Sep 9 00:33:34.232205 containerd[1592]: time="2025-09-09T00:33:34.232145986Z" level=info msg="CreateContainer within sandbox \"3970af2ee6d7af935efa54112c32cb7fae6ee0f92b0f900a72d4cb914a71478d\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Sep 9 00:33:34.287153 containerd[1592]: time="2025-09-09T00:33:34.287083640Z" level=info msg="CreateContainer within sandbox \"3970af2ee6d7af935efa54112c32cb7fae6ee0f92b0f900a72d4cb914a71478d\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"4881747f29fdc22b369ecbf334eb8a9ebee3d9e6dec07e2d1e9f1938c7504b14\"" Sep 9 00:33:34.288029 containerd[1592]: time="2025-09-09T00:33:34.287917102Z" level=info msg="StartContainer for \"4881747f29fdc22b369ecbf334eb8a9ebee3d9e6dec07e2d1e9f1938c7504b14\"" Sep 9 00:33:34.312367 kubelet[2703]: I0909 00:33:34.312274 2703 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 9 00:33:34.515517 containerd[1592]: time="2025-09-09T00:33:34.515362784Z" level=info msg="StartContainer for \"4881747f29fdc22b369ecbf334eb8a9ebee3d9e6dec07e2d1e9f1938c7504b14\" returns successfully" Sep 9 00:33:34.546927 sshd[5632]: pam_unix(sshd:session): session closed for user core Sep 9 00:33:34.555699 systemd[1]: Started sshd@12-10.0.0.108:22-10.0.0.1:46444.service - OpenSSH per-connection server daemon (10.0.0.1:46444). Sep 9 00:33:34.556254 systemd[1]: sshd@11-10.0.0.108:22-10.0.0.1:46430.service: Deactivated successfully. Sep 9 00:33:34.561195 systemd[1]: session-12.scope: Deactivated successfully. Sep 9 00:33:34.561821 systemd-logind[1548]: Session 12 logged out. Waiting for processes to exit. Sep 9 00:33:34.563849 systemd-logind[1548]: Removed session 12. 
Sep 9 00:33:34.592242 sshd[5686]: Accepted publickey for core from 10.0.0.1 port 46444 ssh2: RSA SHA256:KA9/SrLi0HJZnQ3nz9u7pIgg2ymhn74LV9fPMAvvX5M Sep 9 00:33:34.594112 sshd[5686]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 00:33:34.599027 systemd-logind[1548]: New session 13 of user core. Sep 9 00:33:34.609673 systemd[1]: Started session-13.scope - Session 13 of User core. Sep 9 00:33:34.612992 kubelet[2703]: I0909 00:33:34.612958 2703 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Sep 9 00:33:34.613074 kubelet[2703]: I0909 00:33:34.613003 2703 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Sep 9 00:33:34.746730 sshd[5686]: pam_unix(sshd:session): session closed for user core Sep 9 00:33:34.755142 systemd[1]: sshd@12-10.0.0.108:22-10.0.0.1:46444.service: Deactivated successfully. Sep 9 00:33:34.762735 systemd[1]: session-13.scope: Deactivated successfully. Sep 9 00:33:34.764012 systemd-logind[1548]: Session 13 logged out. Waiting for processes to exit. Sep 9 00:33:34.766626 systemd-logind[1548]: Removed session 13. Sep 9 00:33:34.831552 systemd-resolved[1467]: Under memory pressure, flushing caches. Sep 9 00:33:34.831618 systemd-resolved[1467]: Flushed all caches. Sep 9 00:33:34.833446 systemd-journald[1155]: Under memory pressure, flushing caches. Sep 9 00:33:34.835812 kubelet[2703]: I0909 00:33:34.835634 2703 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-dvfmz" podStartSLOduration=23.692921853 podStartE2EDuration="39.835609264s" podCreationTimestamp="2025-09-09 00:32:55 +0000 UTC" firstStartedPulling="2025-09-09 00:33:18.087183507 +0000 UTC m=+39.672567210" lastFinishedPulling="2025-09-09 00:33:34.229870908 +0000 UTC m=+55.815254621" observedRunningTime="2025-09-09 00:33:34.835170587 +0000 UTC m=+56.420554321" watchObservedRunningTime="2025-09-09 00:33:34.835609264 +0000 UTC m=+56.420992967" Sep 9 00:33:38.494994 containerd[1592]: time="2025-09-09T00:33:38.494936660Z" level=info msg="StopPodSandbox for \"3663276c0852bd66d1bd2c667fe721058a9725505d67f72a86fbeb01c8f4e4d7\"" Sep 9 00:33:38.603966 containerd[1592]: 2025-09-09 00:33:38.557 [WARNING][5717] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="3663276c0852bd66d1bd2c667fe721058a9725505d67f72a86fbeb01c8f4e4d7" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7c65d6cfc9--45wzl-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"a104faab-ebd4-4510-b157-e4917f6c56e1", ResourceVersion:"1085", Generation:0, CreationTimestamp:time.Date(2025, time.September, 9, 0, 32, 42, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"e559ee1110580af51952b91ae4e25ca807cc2601d1332bd4d42c3aafdeec4fbb", Pod:"coredns-7c65d6cfc9-45wzl", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali25a961f0a5c", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 9 00:33:38.603966 containerd[1592]: 2025-09-09 00:33:38.558 [INFO][5717] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="3663276c0852bd66d1bd2c667fe721058a9725505d67f72a86fbeb01c8f4e4d7" Sep 9 00:33:38.603966 containerd[1592]: 2025-09-09 00:33:38.558 [INFO][5717] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="3663276c0852bd66d1bd2c667fe721058a9725505d67f72a86fbeb01c8f4e4d7" iface="eth0" netns="" Sep 9 00:33:38.603966 containerd[1592]: 2025-09-09 00:33:38.558 [INFO][5717] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="3663276c0852bd66d1bd2c667fe721058a9725505d67f72a86fbeb01c8f4e4d7" Sep 9 00:33:38.603966 containerd[1592]: 2025-09-09 00:33:38.558 [INFO][5717] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="3663276c0852bd66d1bd2c667fe721058a9725505d67f72a86fbeb01c8f4e4d7" Sep 9 00:33:38.603966 containerd[1592]: 2025-09-09 00:33:38.584 [INFO][5728] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="3663276c0852bd66d1bd2c667fe721058a9725505d67f72a86fbeb01c8f4e4d7" HandleID="k8s-pod-network.3663276c0852bd66d1bd2c667fe721058a9725505d67f72a86fbeb01c8f4e4d7" Workload="localhost-k8s-coredns--7c65d6cfc9--45wzl-eth0" Sep 9 00:33:38.603966 containerd[1592]: 2025-09-09 00:33:38.584 [INFO][5728] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 9 00:33:38.603966 containerd[1592]: 2025-09-09 00:33:38.584 [INFO][5728] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 9 00:33:38.603966 containerd[1592]: 2025-09-09 00:33:38.591 [WARNING][5728] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="3663276c0852bd66d1bd2c667fe721058a9725505d67f72a86fbeb01c8f4e4d7" HandleID="k8s-pod-network.3663276c0852bd66d1bd2c667fe721058a9725505d67f72a86fbeb01c8f4e4d7" Workload="localhost-k8s-coredns--7c65d6cfc9--45wzl-eth0" Sep 9 00:33:38.603966 containerd[1592]: 2025-09-09 00:33:38.591 [INFO][5728] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="3663276c0852bd66d1bd2c667fe721058a9725505d67f72a86fbeb01c8f4e4d7" HandleID="k8s-pod-network.3663276c0852bd66d1bd2c667fe721058a9725505d67f72a86fbeb01c8f4e4d7" Workload="localhost-k8s-coredns--7c65d6cfc9--45wzl-eth0" Sep 9 00:33:38.603966 containerd[1592]: 2025-09-09 00:33:38.593 [INFO][5728] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 9 00:33:38.603966 containerd[1592]: 2025-09-09 00:33:38.600 [INFO][5717] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="3663276c0852bd66d1bd2c667fe721058a9725505d67f72a86fbeb01c8f4e4d7" Sep 9 00:33:38.612342 containerd[1592]: time="2025-09-09T00:33:38.612266075Z" level=info msg="TearDown network for sandbox \"3663276c0852bd66d1bd2c667fe721058a9725505d67f72a86fbeb01c8f4e4d7\" successfully" Sep 9 00:33:38.612342 containerd[1592]: time="2025-09-09T00:33:38.612335126Z" level=info msg="StopPodSandbox for \"3663276c0852bd66d1bd2c667fe721058a9725505d67f72a86fbeb01c8f4e4d7\" returns successfully" Sep 9 00:33:38.670891 containerd[1592]: time="2025-09-09T00:33:38.670811713Z" level=info msg="RemovePodSandbox for \"3663276c0852bd66d1bd2c667fe721058a9725505d67f72a86fbeb01c8f4e4d7\"" Sep 9 00:33:38.673257 containerd[1592]: time="2025-09-09T00:33:38.673221387Z" level=info msg="Forcibly stopping sandbox \"3663276c0852bd66d1bd2c667fe721058a9725505d67f72a86fbeb01c8f4e4d7\"" Sep 9 00:33:38.745782 containerd[1592]: 2025-09-09 00:33:38.707 [WARNING][5745] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="3663276c0852bd66d1bd2c667fe721058a9725505d67f72a86fbeb01c8f4e4d7" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7c65d6cfc9--45wzl-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"a104faab-ebd4-4510-b157-e4917f6c56e1", ResourceVersion:"1085", Generation:0, CreationTimestamp:time.Date(2025, time.September, 9, 0, 32, 42, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"e559ee1110580af51952b91ae4e25ca807cc2601d1332bd4d42c3aafdeec4fbb", Pod:"coredns-7c65d6cfc9-45wzl", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali25a961f0a5c", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 9 00:33:38.745782 containerd[1592]: 2025-09-09 00:33:38.707 [INFO][5745] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="3663276c0852bd66d1bd2c667fe721058a9725505d67f72a86fbeb01c8f4e4d7" Sep 9 00:33:38.745782 containerd[1592]: 2025-09-09 00:33:38.707 [INFO][5745] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="3663276c0852bd66d1bd2c667fe721058a9725505d67f72a86fbeb01c8f4e4d7" iface="eth0" netns="" Sep 9 00:33:38.745782 containerd[1592]: 2025-09-09 00:33:38.707 [INFO][5745] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="3663276c0852bd66d1bd2c667fe721058a9725505d67f72a86fbeb01c8f4e4d7" Sep 9 00:33:38.745782 containerd[1592]: 2025-09-09 00:33:38.707 [INFO][5745] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="3663276c0852bd66d1bd2c667fe721058a9725505d67f72a86fbeb01c8f4e4d7" Sep 9 00:33:38.745782 containerd[1592]: 2025-09-09 00:33:38.729 [INFO][5753] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="3663276c0852bd66d1bd2c667fe721058a9725505d67f72a86fbeb01c8f4e4d7" HandleID="k8s-pod-network.3663276c0852bd66d1bd2c667fe721058a9725505d67f72a86fbeb01c8f4e4d7" Workload="localhost-k8s-coredns--7c65d6cfc9--45wzl-eth0" Sep 9 00:33:38.745782 containerd[1592]: 2025-09-09 00:33:38.729 [INFO][5753] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 9 00:33:38.745782 containerd[1592]: 2025-09-09 00:33:38.729 [INFO][5753] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 9 00:33:38.745782 containerd[1592]: 2025-09-09 00:33:38.737 [WARNING][5753] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="3663276c0852bd66d1bd2c667fe721058a9725505d67f72a86fbeb01c8f4e4d7" HandleID="k8s-pod-network.3663276c0852bd66d1bd2c667fe721058a9725505d67f72a86fbeb01c8f4e4d7" Workload="localhost-k8s-coredns--7c65d6cfc9--45wzl-eth0" Sep 9 00:33:38.745782 containerd[1592]: 2025-09-09 00:33:38.737 [INFO][5753] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="3663276c0852bd66d1bd2c667fe721058a9725505d67f72a86fbeb01c8f4e4d7" HandleID="k8s-pod-network.3663276c0852bd66d1bd2c667fe721058a9725505d67f72a86fbeb01c8f4e4d7" Workload="localhost-k8s-coredns--7c65d6cfc9--45wzl-eth0" Sep 9 00:33:38.745782 containerd[1592]: 2025-09-09 00:33:38.739 [INFO][5753] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 9 00:33:38.745782 containerd[1592]: 2025-09-09 00:33:38.742 [INFO][5745] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="3663276c0852bd66d1bd2c667fe721058a9725505d67f72a86fbeb01c8f4e4d7" Sep 9 00:33:38.745782 containerd[1592]: time="2025-09-09T00:33:38.745728405Z" level=info msg="TearDown network for sandbox \"3663276c0852bd66d1bd2c667fe721058a9725505d67f72a86fbeb01c8f4e4d7\" successfully" Sep 9 00:33:39.360308 containerd[1592]: time="2025-09-09T00:33:39.360236803Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"3663276c0852bd66d1bd2c667fe721058a9725505d67f72a86fbeb01c8f4e4d7\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Sep 9 00:33:39.360483 containerd[1592]: time="2025-09-09T00:33:39.360345509Z" level=info msg="RemovePodSandbox \"3663276c0852bd66d1bd2c667fe721058a9725505d67f72a86fbeb01c8f4e4d7\" returns successfully" Sep 9 00:33:39.371335 containerd[1592]: time="2025-09-09T00:33:39.371308144Z" level=info msg="StopPodSandbox for \"99a29b286bc17ccc596281946947470e062d46364a22d7b602e1cec39d31bc2e\"" Sep 9 00:33:39.561807 containerd[1592]: 2025-09-09 00:33:39.406 [WARNING][5771] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="99a29b286bc17ccc596281946947470e062d46364a22d7b602e1cec39d31bc2e" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7c65d6cfc9--4m9xw-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"ec5d7db1-2706-41c7-b992-bd43c3dcfac0", ResourceVersion:"1045", Generation:0, CreationTimestamp:time.Date(2025, time.September, 9, 0, 32, 42, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"b4f37f569cc27ad0217f48a9e04cc951dd2046c27766b45d3331379418424d33", Pod:"coredns-7c65d6cfc9-4m9xw", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calid2ae74cd4a7", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 9 00:33:39.561807 containerd[1592]: 2025-09-09 00:33:39.407 [INFO][5771] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="99a29b286bc17ccc596281946947470e062d46364a22d7b602e1cec39d31bc2e" Sep 9 00:33:39.561807 containerd[1592]: 2025-09-09 00:33:39.407 [INFO][5771] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="99a29b286bc17ccc596281946947470e062d46364a22d7b602e1cec39d31bc2e" iface="eth0" netns="" Sep 9 00:33:39.561807 containerd[1592]: 2025-09-09 00:33:39.407 [INFO][5771] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="99a29b286bc17ccc596281946947470e062d46364a22d7b602e1cec39d31bc2e" Sep 9 00:33:39.561807 containerd[1592]: 2025-09-09 00:33:39.407 [INFO][5771] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="99a29b286bc17ccc596281946947470e062d46364a22d7b602e1cec39d31bc2e" Sep 9 00:33:39.561807 containerd[1592]: 2025-09-09 00:33:39.437 [INFO][5779] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="99a29b286bc17ccc596281946947470e062d46364a22d7b602e1cec39d31bc2e" HandleID="k8s-pod-network.99a29b286bc17ccc596281946947470e062d46364a22d7b602e1cec39d31bc2e" Workload="localhost-k8s-coredns--7c65d6cfc9--4m9xw-eth0" Sep 9 00:33:39.561807 containerd[1592]: 2025-09-09 00:33:39.437 [INFO][5779] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 9 00:33:39.561807 containerd[1592]: 2025-09-09 00:33:39.437 [INFO][5779] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 9 00:33:39.561807 containerd[1592]: 2025-09-09 00:33:39.552 [WARNING][5779] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="99a29b286bc17ccc596281946947470e062d46364a22d7b602e1cec39d31bc2e" HandleID="k8s-pod-network.99a29b286bc17ccc596281946947470e062d46364a22d7b602e1cec39d31bc2e" Workload="localhost-k8s-coredns--7c65d6cfc9--4m9xw-eth0" Sep 9 00:33:39.561807 containerd[1592]: 2025-09-09 00:33:39.552 [INFO][5779] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="99a29b286bc17ccc596281946947470e062d46364a22d7b602e1cec39d31bc2e" HandleID="k8s-pod-network.99a29b286bc17ccc596281946947470e062d46364a22d7b602e1cec39d31bc2e" Workload="localhost-k8s-coredns--7c65d6cfc9--4m9xw-eth0" Sep 9 00:33:39.561807 containerd[1592]: 2025-09-09 00:33:39.554 [INFO][5779] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 9 00:33:39.561807 containerd[1592]: 2025-09-09 00:33:39.558 [INFO][5771] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="99a29b286bc17ccc596281946947470e062d46364a22d7b602e1cec39d31bc2e" Sep 9 00:33:39.563297 containerd[1592]: time="2025-09-09T00:33:39.561866636Z" level=info msg="TearDown network for sandbox \"99a29b286bc17ccc596281946947470e062d46364a22d7b602e1cec39d31bc2e\" successfully" Sep 9 00:33:39.563297 containerd[1592]: time="2025-09-09T00:33:39.561898446Z" level=info msg="StopPodSandbox for \"99a29b286bc17ccc596281946947470e062d46364a22d7b602e1cec39d31bc2e\" returns successfully" Sep 9 00:33:39.563297 containerd[1592]: time="2025-09-09T00:33:39.562787873Z" level=info msg="RemovePodSandbox for \"99a29b286bc17ccc596281946947470e062d46364a22d7b602e1cec39d31bc2e\"" Sep 9 00:33:39.563297 containerd[1592]: time="2025-09-09T00:33:39.562840432Z" level=info msg="Forcibly stopping sandbox \"99a29b286bc17ccc596281946947470e062d46364a22d7b602e1cec39d31bc2e\"" Sep 9 00:33:39.655191 containerd[1592]: 2025-09-09 00:33:39.618 [WARNING][5796] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="99a29b286bc17ccc596281946947470e062d46364a22d7b602e1cec39d31bc2e" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7c65d6cfc9--4m9xw-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"ec5d7db1-2706-41c7-b992-bd43c3dcfac0", ResourceVersion:"1045", Generation:0, CreationTimestamp:time.Date(2025, time.September, 9, 0, 32, 42, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"b4f37f569cc27ad0217f48a9e04cc951dd2046c27766b45d3331379418424d33", Pod:"coredns-7c65d6cfc9-4m9xw", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calid2ae74cd4a7", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 9 00:33:39.655191 containerd[1592]: 2025-09-09 00:33:39.618 [INFO][5796] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="99a29b286bc17ccc596281946947470e062d46364a22d7b602e1cec39d31bc2e" Sep 9 00:33:39.655191 containerd[1592]: 2025-09-09 00:33:39.618 [INFO][5796] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="99a29b286bc17ccc596281946947470e062d46364a22d7b602e1cec39d31bc2e" iface="eth0" netns="" Sep 9 00:33:39.655191 containerd[1592]: 2025-09-09 00:33:39.618 [INFO][5796] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="99a29b286bc17ccc596281946947470e062d46364a22d7b602e1cec39d31bc2e" Sep 9 00:33:39.655191 containerd[1592]: 2025-09-09 00:33:39.618 [INFO][5796] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="99a29b286bc17ccc596281946947470e062d46364a22d7b602e1cec39d31bc2e" Sep 9 00:33:39.655191 containerd[1592]: 2025-09-09 00:33:39.642 [INFO][5804] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="99a29b286bc17ccc596281946947470e062d46364a22d7b602e1cec39d31bc2e" HandleID="k8s-pod-network.99a29b286bc17ccc596281946947470e062d46364a22d7b602e1cec39d31bc2e" Workload="localhost-k8s-coredns--7c65d6cfc9--4m9xw-eth0" Sep 9 00:33:39.655191 containerd[1592]: 2025-09-09 00:33:39.642 [INFO][5804] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 9 00:33:39.655191 containerd[1592]: 2025-09-09 00:33:39.642 [INFO][5804] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 9 00:33:39.655191 containerd[1592]: 2025-09-09 00:33:39.647 [WARNING][5804] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="99a29b286bc17ccc596281946947470e062d46364a22d7b602e1cec39d31bc2e" HandleID="k8s-pod-network.99a29b286bc17ccc596281946947470e062d46364a22d7b602e1cec39d31bc2e" Workload="localhost-k8s-coredns--7c65d6cfc9--4m9xw-eth0" Sep 9 00:33:39.655191 containerd[1592]: 2025-09-09 00:33:39.647 [INFO][5804] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="99a29b286bc17ccc596281946947470e062d46364a22d7b602e1cec39d31bc2e" HandleID="k8s-pod-network.99a29b286bc17ccc596281946947470e062d46364a22d7b602e1cec39d31bc2e" Workload="localhost-k8s-coredns--7c65d6cfc9--4m9xw-eth0" Sep 9 00:33:39.655191 containerd[1592]: 2025-09-09 00:33:39.649 [INFO][5804] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 9 00:33:39.655191 containerd[1592]: 2025-09-09 00:33:39.652 [INFO][5796] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="99a29b286bc17ccc596281946947470e062d46364a22d7b602e1cec39d31bc2e" Sep 9 00:33:39.655191 containerd[1592]: time="2025-09-09T00:33:39.655172761Z" level=info msg="TearDown network for sandbox \"99a29b286bc17ccc596281946947470e062d46364a22d7b602e1cec39d31bc2e\" successfully" Sep 9 00:33:39.696713 containerd[1592]: time="2025-09-09T00:33:39.696618760Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"99a29b286bc17ccc596281946947470e062d46364a22d7b602e1cec39d31bc2e\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Sep 9 00:33:39.696890 containerd[1592]: time="2025-09-09T00:33:39.696734851Z" level=info msg="RemovePodSandbox \"99a29b286bc17ccc596281946947470e062d46364a22d7b602e1cec39d31bc2e\" returns successfully" Sep 9 00:33:39.697318 containerd[1592]: time="2025-09-09T00:33:39.697284633Z" level=info msg="StopPodSandbox for \"dc974c20813b8cc8d10aa01c404cb83680815b33858df5921f8aaa95fbe0dd7a\"" Sep 9 00:33:39.761721 systemd[1]: Started sshd@13-10.0.0.108:22-10.0.0.1:46456.service - OpenSSH per-connection server daemon (10.0.0.1:46456). Sep 9 00:33:39.800173 sshd[5829]: Accepted publickey for core from 10.0.0.1 port 46456 ssh2: RSA SHA256:KA9/SrLi0HJZnQ3nz9u7pIgg2ymhn74LV9fPMAvvX5M Sep 9 00:33:39.802186 sshd[5829]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 00:33:39.806345 systemd-logind[1548]: New session 14 of user core. Sep 9 00:33:39.812669 systemd[1]: Started session-14.scope - Session 14 of User core. Sep 9 00:33:39.920295 containerd[1592]: 2025-09-09 00:33:39.879 [WARNING][5821] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="dc974c20813b8cc8d10aa01c404cb83680815b33858df5921f8aaa95fbe0dd7a" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--55bfc5d889--9l6s4-eth0", GenerateName:"calico-kube-controllers-55bfc5d889-", Namespace:"calico-system", SelfLink:"", UID:"25010456-202e-41ea-aa9f-fe497ae64e66", ResourceVersion:"1140", Generation:0, CreationTimestamp:time.Date(2025, time.September, 9, 0, 32, 55, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"55bfc5d889", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"72616e094ada893052a5655b686c16af5c95a45ce366e628808970b2352d5f80", Pod:"calico-kube-controllers-55bfc5d889-9l6s4", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calie5ed1746639", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 9 00:33:39.920295 containerd[1592]: 2025-09-09 00:33:39.880 [INFO][5821] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="dc974c20813b8cc8d10aa01c404cb83680815b33858df5921f8aaa95fbe0dd7a" Sep 9 00:33:39.920295 containerd[1592]: 2025-09-09 00:33:39.880 [INFO][5821] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="dc974c20813b8cc8d10aa01c404cb83680815b33858df5921f8aaa95fbe0dd7a" iface="eth0" netns="" Sep 9 00:33:39.920295 containerd[1592]: 2025-09-09 00:33:39.880 [INFO][5821] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="dc974c20813b8cc8d10aa01c404cb83680815b33858df5921f8aaa95fbe0dd7a" Sep 9 00:33:39.920295 containerd[1592]: 2025-09-09 00:33:39.880 [INFO][5821] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="dc974c20813b8cc8d10aa01c404cb83680815b33858df5921f8aaa95fbe0dd7a" Sep 9 00:33:39.920295 containerd[1592]: 2025-09-09 00:33:39.906 [INFO][5842] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="dc974c20813b8cc8d10aa01c404cb83680815b33858df5921f8aaa95fbe0dd7a" HandleID="k8s-pod-network.dc974c20813b8cc8d10aa01c404cb83680815b33858df5921f8aaa95fbe0dd7a" Workload="localhost-k8s-calico--kube--controllers--55bfc5d889--9l6s4-eth0" Sep 9 00:33:39.920295 containerd[1592]: 2025-09-09 00:33:39.906 [INFO][5842] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 9 00:33:39.920295 containerd[1592]: 2025-09-09 00:33:39.906 [INFO][5842] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 9 00:33:39.920295 containerd[1592]: 2025-09-09 00:33:39.912 [WARNING][5842] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="dc974c20813b8cc8d10aa01c404cb83680815b33858df5921f8aaa95fbe0dd7a" HandleID="k8s-pod-network.dc974c20813b8cc8d10aa01c404cb83680815b33858df5921f8aaa95fbe0dd7a" Workload="localhost-k8s-calico--kube--controllers--55bfc5d889--9l6s4-eth0" Sep 9 00:33:39.920295 containerd[1592]: 2025-09-09 00:33:39.912 [INFO][5842] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="dc974c20813b8cc8d10aa01c404cb83680815b33858df5921f8aaa95fbe0dd7a" HandleID="k8s-pod-network.dc974c20813b8cc8d10aa01c404cb83680815b33858df5921f8aaa95fbe0dd7a" Workload="localhost-k8s-calico--kube--controllers--55bfc5d889--9l6s4-eth0" Sep 9 00:33:39.920295 containerd[1592]: 2025-09-09 00:33:39.913 [INFO][5842] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 9 00:33:39.920295 containerd[1592]: 2025-09-09 00:33:39.916 [INFO][5821] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="dc974c20813b8cc8d10aa01c404cb83680815b33858df5921f8aaa95fbe0dd7a" Sep 9 00:33:39.920295 containerd[1592]: time="2025-09-09T00:33:39.920272861Z" level=info msg="TearDown network for sandbox \"dc974c20813b8cc8d10aa01c404cb83680815b33858df5921f8aaa95fbe0dd7a\" successfully" Sep 9 00:33:39.920972 containerd[1592]: time="2025-09-09T00:33:39.920300082Z" level=info msg="StopPodSandbox for \"dc974c20813b8cc8d10aa01c404cb83680815b33858df5921f8aaa95fbe0dd7a\" returns successfully" Sep 9 00:33:39.920972 containerd[1592]: time="2025-09-09T00:33:39.920894078Z" level=info msg="RemovePodSandbox for \"dc974c20813b8cc8d10aa01c404cb83680815b33858df5921f8aaa95fbe0dd7a\"" Sep 9 00:33:39.920972 containerd[1592]: time="2025-09-09T00:33:39.920929766Z" level=info msg="Forcibly stopping sandbox \"dc974c20813b8cc8d10aa01c404cb83680815b33858df5921f8aaa95fbe0dd7a\"" Sep 9 00:33:40.092683 sshd[5829]: pam_unix(sshd:session): session closed for user core Sep 9 00:33:40.097591 systemd[1]: sshd@13-10.0.0.108:22-10.0.0.1:46456.service: Deactivated successfully. Sep 9 00:33:40.101308 systemd-logind[1548]: Session 14 logged out. Waiting for processes to exit. Sep 9 00:33:40.101652 systemd[1]: session-14.scope: Deactivated successfully. Sep 9 00:33:40.103710 systemd-logind[1548]: Removed session 14. Sep 9 00:33:40.126404 containerd[1592]: 2025-09-09 00:33:40.088 [WARNING][5859] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="dc974c20813b8cc8d10aa01c404cb83680815b33858df5921f8aaa95fbe0dd7a" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--55bfc5d889--9l6s4-eth0", GenerateName:"calico-kube-controllers-55bfc5d889-", Namespace:"calico-system", SelfLink:"", UID:"25010456-202e-41ea-aa9f-fe497ae64e66", ResourceVersion:"1140", Generation:0, CreationTimestamp:time.Date(2025, time.September, 9, 0, 32, 55, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"55bfc5d889", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"72616e094ada893052a5655b686c16af5c95a45ce366e628808970b2352d5f80", Pod:"calico-kube-controllers-55bfc5d889-9l6s4", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calie5ed1746639", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 9 00:33:40.126404 containerd[1592]: 2025-09-09 00:33:40.089 [INFO][5859] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="dc974c20813b8cc8d10aa01c404cb83680815b33858df5921f8aaa95fbe0dd7a" Sep 9 00:33:40.126404 containerd[1592]: 2025-09-09 00:33:40.089 [INFO][5859] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="dc974c20813b8cc8d10aa01c404cb83680815b33858df5921f8aaa95fbe0dd7a" iface="eth0" netns="" Sep 9 00:33:40.126404 containerd[1592]: 2025-09-09 00:33:40.089 [INFO][5859] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="dc974c20813b8cc8d10aa01c404cb83680815b33858df5921f8aaa95fbe0dd7a" Sep 9 00:33:40.126404 containerd[1592]: 2025-09-09 00:33:40.089 [INFO][5859] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="dc974c20813b8cc8d10aa01c404cb83680815b33858df5921f8aaa95fbe0dd7a" Sep 9 00:33:40.126404 containerd[1592]: 2025-09-09 00:33:40.113 [INFO][5868] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="dc974c20813b8cc8d10aa01c404cb83680815b33858df5921f8aaa95fbe0dd7a" HandleID="k8s-pod-network.dc974c20813b8cc8d10aa01c404cb83680815b33858df5921f8aaa95fbe0dd7a" Workload="localhost-k8s-calico--kube--controllers--55bfc5d889--9l6s4-eth0" Sep 9 00:33:40.126404 containerd[1592]: 2025-09-09 00:33:40.113 [INFO][5868] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 9 00:33:40.126404 containerd[1592]: 2025-09-09 00:33:40.113 [INFO][5868] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 9 00:33:40.126404 containerd[1592]: 2025-09-09 00:33:40.119 [WARNING][5868] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="dc974c20813b8cc8d10aa01c404cb83680815b33858df5921f8aaa95fbe0dd7a" HandleID="k8s-pod-network.dc974c20813b8cc8d10aa01c404cb83680815b33858df5921f8aaa95fbe0dd7a" Workload="localhost-k8s-calico--kube--controllers--55bfc5d889--9l6s4-eth0" Sep 9 00:33:40.126404 containerd[1592]: 2025-09-09 00:33:40.119 [INFO][5868] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="dc974c20813b8cc8d10aa01c404cb83680815b33858df5921f8aaa95fbe0dd7a" HandleID="k8s-pod-network.dc974c20813b8cc8d10aa01c404cb83680815b33858df5921f8aaa95fbe0dd7a" Workload="localhost-k8s-calico--kube--controllers--55bfc5d889--9l6s4-eth0" Sep 9 00:33:40.126404 containerd[1592]: 2025-09-09 00:33:40.120 [INFO][5868] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 9 00:33:40.126404 containerd[1592]: 2025-09-09 00:33:40.123 [INFO][5859] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="dc974c20813b8cc8d10aa01c404cb83680815b33858df5921f8aaa95fbe0dd7a" Sep 9 00:33:40.126892 containerd[1592]: time="2025-09-09T00:33:40.126482355Z" level=info msg="TearDown network for sandbox \"dc974c20813b8cc8d10aa01c404cb83680815b33858df5921f8aaa95fbe0dd7a\" successfully" Sep 9 00:33:40.136957 containerd[1592]: time="2025-09-09T00:33:40.136915798Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"dc974c20813b8cc8d10aa01c404cb83680815b33858df5921f8aaa95fbe0dd7a\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Sep 9 00:33:40.137082 containerd[1592]: time="2025-09-09T00:33:40.136983517Z" level=info msg="RemovePodSandbox \"dc974c20813b8cc8d10aa01c404cb83680815b33858df5921f8aaa95fbe0dd7a\" returns successfully" Sep 9 00:33:40.137504 containerd[1592]: time="2025-09-09T00:33:40.137476823Z" level=info msg="StopPodSandbox for \"31677aded0820c2ce1d921994768b27533b6946692105fd6396b357e4cb8c793\"" Sep 9 00:33:40.212632 containerd[1592]: 2025-09-09 00:33:40.172 [WARNING][5889] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="31677aded0820c2ce1d921994768b27533b6946692105fd6396b357e4cb8c793" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--dvfmz-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"45baac1d-c9f0-4704-a887-7b015b292f0b", ResourceVersion:"1211", Generation:0, CreationTimestamp:time.Date(2025, time.September, 9, 0, 32, 55, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"856c6b598f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"3970af2ee6d7af935efa54112c32cb7fae6ee0f92b0f900a72d4cb914a71478d", Pod:"csi-node-driver-dvfmz", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calibbbcfddec73", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 9 00:33:40.212632 containerd[1592]: 2025-09-09 00:33:40.172 [INFO][5889] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="31677aded0820c2ce1d921994768b27533b6946692105fd6396b357e4cb8c793" Sep 9 00:33:40.212632 containerd[1592]: 2025-09-09 00:33:40.172 [INFO][5889] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="31677aded0820c2ce1d921994768b27533b6946692105fd6396b357e4cb8c793" iface="eth0" netns="" Sep 9 00:33:40.212632 containerd[1592]: 2025-09-09 00:33:40.172 [INFO][5889] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="31677aded0820c2ce1d921994768b27533b6946692105fd6396b357e4cb8c793" Sep 9 00:33:40.212632 containerd[1592]: 2025-09-09 00:33:40.172 [INFO][5889] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="31677aded0820c2ce1d921994768b27533b6946692105fd6396b357e4cb8c793" Sep 9 00:33:40.212632 containerd[1592]: 2025-09-09 00:33:40.196 [INFO][5898] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="31677aded0820c2ce1d921994768b27533b6946692105fd6396b357e4cb8c793" HandleID="k8s-pod-network.31677aded0820c2ce1d921994768b27533b6946692105fd6396b357e4cb8c793" Workload="localhost-k8s-csi--node--driver--dvfmz-eth0" Sep 9 00:33:40.212632 containerd[1592]: 2025-09-09 00:33:40.196 [INFO][5898] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 9 00:33:40.212632 containerd[1592]: 2025-09-09 00:33:40.196 [INFO][5898] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 9 00:33:40.212632 containerd[1592]: 2025-09-09 00:33:40.203 [WARNING][5898] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="31677aded0820c2ce1d921994768b27533b6946692105fd6396b357e4cb8c793" HandleID="k8s-pod-network.31677aded0820c2ce1d921994768b27533b6946692105fd6396b357e4cb8c793" Workload="localhost-k8s-csi--node--driver--dvfmz-eth0" Sep 9 00:33:40.212632 containerd[1592]: 2025-09-09 00:33:40.203 [INFO][5898] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="31677aded0820c2ce1d921994768b27533b6946692105fd6396b357e4cb8c793" HandleID="k8s-pod-network.31677aded0820c2ce1d921994768b27533b6946692105fd6396b357e4cb8c793" Workload="localhost-k8s-csi--node--driver--dvfmz-eth0" Sep 9 00:33:40.212632 containerd[1592]: 2025-09-09 00:33:40.204 [INFO][5898] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 9 00:33:40.212632 containerd[1592]: 2025-09-09 00:33:40.208 [INFO][5889] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="31677aded0820c2ce1d921994768b27533b6946692105fd6396b357e4cb8c793" Sep 9 00:33:40.212632 containerd[1592]: time="2025-09-09T00:33:40.212182884Z" level=info msg="TearDown network for sandbox \"31677aded0820c2ce1d921994768b27533b6946692105fd6396b357e4cb8c793\" successfully" Sep 9 00:33:40.212632 containerd[1592]: time="2025-09-09T00:33:40.212212451Z" level=info msg="StopPodSandbox for \"31677aded0820c2ce1d921994768b27533b6946692105fd6396b357e4cb8c793\" returns successfully" Sep 9 00:33:40.213838 containerd[1592]: time="2025-09-09T00:33:40.212801769Z" level=info msg="RemovePodSandbox for \"31677aded0820c2ce1d921994768b27533b6946692105fd6396b357e4cb8c793\"" Sep 9 00:33:40.213838 containerd[1592]: time="2025-09-09T00:33:40.212831887Z" level=info msg="Forcibly stopping sandbox \"31677aded0820c2ce1d921994768b27533b6946692105fd6396b357e4cb8c793\"" Sep 9 00:33:40.349672 containerd[1592]: 2025-09-09 00:33:40.312 [WARNING][5917] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="31677aded0820c2ce1d921994768b27533b6946692105fd6396b357e4cb8c793" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--dvfmz-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"45baac1d-c9f0-4704-a887-7b015b292f0b", ResourceVersion:"1211", Generation:0, CreationTimestamp:time.Date(2025, time.September, 9, 0, 32, 55, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"856c6b598f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"3970af2ee6d7af935efa54112c32cb7fae6ee0f92b0f900a72d4cb914a71478d", Pod:"csi-node-driver-dvfmz", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calibbbcfddec73", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 9 00:33:40.349672 containerd[1592]: 2025-09-09 00:33:40.312 [INFO][5917] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="31677aded0820c2ce1d921994768b27533b6946692105fd6396b357e4cb8c793" Sep 9 00:33:40.349672 containerd[1592]: 2025-09-09 00:33:40.312 [INFO][5917] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="31677aded0820c2ce1d921994768b27533b6946692105fd6396b357e4cb8c793" iface="eth0" netns="" Sep 9 00:33:40.349672 containerd[1592]: 2025-09-09 00:33:40.312 [INFO][5917] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="31677aded0820c2ce1d921994768b27533b6946692105fd6396b357e4cb8c793" Sep 9 00:33:40.349672 containerd[1592]: 2025-09-09 00:33:40.312 [INFO][5917] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="31677aded0820c2ce1d921994768b27533b6946692105fd6396b357e4cb8c793" Sep 9 00:33:40.349672 containerd[1592]: 2025-09-09 00:33:40.335 [INFO][5926] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="31677aded0820c2ce1d921994768b27533b6946692105fd6396b357e4cb8c793" HandleID="k8s-pod-network.31677aded0820c2ce1d921994768b27533b6946692105fd6396b357e4cb8c793" Workload="localhost-k8s-csi--node--driver--dvfmz-eth0" Sep 9 00:33:40.349672 containerd[1592]: 2025-09-09 00:33:40.335 [INFO][5926] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 9 00:33:40.349672 containerd[1592]: 2025-09-09 00:33:40.335 [INFO][5926] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 9 00:33:40.349672 containerd[1592]: 2025-09-09 00:33:40.341 [WARNING][5926] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="31677aded0820c2ce1d921994768b27533b6946692105fd6396b357e4cb8c793" HandleID="k8s-pod-network.31677aded0820c2ce1d921994768b27533b6946692105fd6396b357e4cb8c793" Workload="localhost-k8s-csi--node--driver--dvfmz-eth0" Sep 9 00:33:40.349672 containerd[1592]: 2025-09-09 00:33:40.341 [INFO][5926] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="31677aded0820c2ce1d921994768b27533b6946692105fd6396b357e4cb8c793" HandleID="k8s-pod-network.31677aded0820c2ce1d921994768b27533b6946692105fd6396b357e4cb8c793" Workload="localhost-k8s-csi--node--driver--dvfmz-eth0" Sep 9 00:33:40.349672 containerd[1592]: 2025-09-09 00:33:40.343 [INFO][5926] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 9 00:33:40.349672 containerd[1592]: 2025-09-09 00:33:40.346 [INFO][5917] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="31677aded0820c2ce1d921994768b27533b6946692105fd6396b357e4cb8c793" Sep 9 00:33:40.350163 containerd[1592]: time="2025-09-09T00:33:40.349745287Z" level=info msg="TearDown network for sandbox \"31677aded0820c2ce1d921994768b27533b6946692105fd6396b357e4cb8c793\" successfully" Sep 9 00:33:40.355642 containerd[1592]: time="2025-09-09T00:33:40.355573429Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"31677aded0820c2ce1d921994768b27533b6946692105fd6396b357e4cb8c793\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Sep 9 00:33:40.355687 containerd[1592]: time="2025-09-09T00:33:40.355669421Z" level=info msg="RemovePodSandbox \"31677aded0820c2ce1d921994768b27533b6946692105fd6396b357e4cb8c793\" returns successfully" Sep 9 00:33:40.356287 containerd[1592]: time="2025-09-09T00:33:40.356237229Z" level=info msg="StopPodSandbox for \"840ec3158e679d69dff21a50896b36dbde90719eff79e14b22894ff7dc8f2486\"" Sep 9 00:33:40.438395 containerd[1592]: 2025-09-09 00:33:40.394 [WARNING][5943] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="840ec3158e679d69dff21a50896b36dbde90719eff79e14b22894ff7dc8f2486" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--6544c75f8--qw6bt-eth0", GenerateName:"calico-apiserver-6544c75f8-", Namespace:"calico-apiserver", SelfLink:"", UID:"e073184d-6f60-4919-94e6-d04e6ac8bc91", ResourceVersion:"1176", Generation:0, CreationTimestamp:time.Date(2025, time.September, 9, 0, 32, 52, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6544c75f8", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"e9ab2a1225323c0a3c4d5bebe9ba3e19ed28b4ccaf3e4ef6c4fd3bb898423c50", Pod:"calico-apiserver-6544c75f8-qw6bt", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali2274b1979bf", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 9 00:33:40.438395 containerd[1592]: 2025-09-09 00:33:40.395 [INFO][5943] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="840ec3158e679d69dff21a50896b36dbde90719eff79e14b22894ff7dc8f2486" Sep 9 00:33:40.438395 containerd[1592]: 2025-09-09 00:33:40.395 [INFO][5943] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="840ec3158e679d69dff21a50896b36dbde90719eff79e14b22894ff7dc8f2486" iface="eth0" netns="" Sep 9 00:33:40.438395 containerd[1592]: 2025-09-09 00:33:40.395 [INFO][5943] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="840ec3158e679d69dff21a50896b36dbde90719eff79e14b22894ff7dc8f2486" Sep 9 00:33:40.438395 containerd[1592]: 2025-09-09 00:33:40.395 [INFO][5943] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="840ec3158e679d69dff21a50896b36dbde90719eff79e14b22894ff7dc8f2486" Sep 9 00:33:40.438395 containerd[1592]: 2025-09-09 00:33:40.419 [INFO][5953] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="840ec3158e679d69dff21a50896b36dbde90719eff79e14b22894ff7dc8f2486" HandleID="k8s-pod-network.840ec3158e679d69dff21a50896b36dbde90719eff79e14b22894ff7dc8f2486" Workload="localhost-k8s-calico--apiserver--6544c75f8--qw6bt-eth0" Sep 9 00:33:40.438395 containerd[1592]: 2025-09-09 00:33:40.419 [INFO][5953] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 9 00:33:40.438395 containerd[1592]: 2025-09-09 00:33:40.420 [INFO][5953] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 9 00:33:40.438395 containerd[1592]: 2025-09-09 00:33:40.426 [WARNING][5953] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="840ec3158e679d69dff21a50896b36dbde90719eff79e14b22894ff7dc8f2486" HandleID="k8s-pod-network.840ec3158e679d69dff21a50896b36dbde90719eff79e14b22894ff7dc8f2486" Workload="localhost-k8s-calico--apiserver--6544c75f8--qw6bt-eth0" Sep 9 00:33:40.438395 containerd[1592]: 2025-09-09 00:33:40.426 [INFO][5953] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="840ec3158e679d69dff21a50896b36dbde90719eff79e14b22894ff7dc8f2486" HandleID="k8s-pod-network.840ec3158e679d69dff21a50896b36dbde90719eff79e14b22894ff7dc8f2486" Workload="localhost-k8s-calico--apiserver--6544c75f8--qw6bt-eth0" Sep 9 00:33:40.438395 containerd[1592]: 2025-09-09 00:33:40.429 [INFO][5953] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 9 00:33:40.438395 containerd[1592]: 2025-09-09 00:33:40.434 [INFO][5943] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="840ec3158e679d69dff21a50896b36dbde90719eff79e14b22894ff7dc8f2486" Sep 9 00:33:40.438896 containerd[1592]: time="2025-09-09T00:33:40.438466921Z" level=info msg="TearDown network for sandbox \"840ec3158e679d69dff21a50896b36dbde90719eff79e14b22894ff7dc8f2486\" successfully" Sep 9 00:33:40.438896 containerd[1592]: time="2025-09-09T00:33:40.438494924Z" level=info msg="StopPodSandbox for \"840ec3158e679d69dff21a50896b36dbde90719eff79e14b22894ff7dc8f2486\" returns successfully" Sep 9 00:33:40.439087 containerd[1592]: time="2025-09-09T00:33:40.439022024Z" level=info msg="RemovePodSandbox for \"840ec3158e679d69dff21a50896b36dbde90719eff79e14b22894ff7dc8f2486\"" Sep 9 00:33:40.439087 containerd[1592]: time="2025-09-09T00:33:40.439067701Z" level=info msg="Forcibly stopping sandbox \"840ec3158e679d69dff21a50896b36dbde90719eff79e14b22894ff7dc8f2486\"" Sep 9 00:33:40.519028 containerd[1592]: 2025-09-09 00:33:40.475 [WARNING][5970] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="840ec3158e679d69dff21a50896b36dbde90719eff79e14b22894ff7dc8f2486" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--6544c75f8--qw6bt-eth0", GenerateName:"calico-apiserver-6544c75f8-", Namespace:"calico-apiserver", SelfLink:"", UID:"e073184d-6f60-4919-94e6-d04e6ac8bc91", ResourceVersion:"1176", Generation:0, CreationTimestamp:time.Date(2025, time.September, 9, 0, 32, 52, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6544c75f8", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"e9ab2a1225323c0a3c4d5bebe9ba3e19ed28b4ccaf3e4ef6c4fd3bb898423c50", Pod:"calico-apiserver-6544c75f8-qw6bt", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali2274b1979bf", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 9 00:33:40.519028 containerd[1592]: 2025-09-09 00:33:40.475 [INFO][5970] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="840ec3158e679d69dff21a50896b36dbde90719eff79e14b22894ff7dc8f2486" Sep 9 00:33:40.519028 containerd[1592]: 2025-09-09 00:33:40.475 [INFO][5970] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="840ec3158e679d69dff21a50896b36dbde90719eff79e14b22894ff7dc8f2486" iface="eth0" netns="" Sep 9 00:33:40.519028 containerd[1592]: 2025-09-09 00:33:40.475 [INFO][5970] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="840ec3158e679d69dff21a50896b36dbde90719eff79e14b22894ff7dc8f2486" Sep 9 00:33:40.519028 containerd[1592]: 2025-09-09 00:33:40.475 [INFO][5970] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="840ec3158e679d69dff21a50896b36dbde90719eff79e14b22894ff7dc8f2486" Sep 9 00:33:40.519028 containerd[1592]: 2025-09-09 00:33:40.499 [INFO][5979] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="840ec3158e679d69dff21a50896b36dbde90719eff79e14b22894ff7dc8f2486" HandleID="k8s-pod-network.840ec3158e679d69dff21a50896b36dbde90719eff79e14b22894ff7dc8f2486" Workload="localhost-k8s-calico--apiserver--6544c75f8--qw6bt-eth0" Sep 9 00:33:40.519028 containerd[1592]: 2025-09-09 00:33:40.499 [INFO][5979] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 9 00:33:40.519028 containerd[1592]: 2025-09-09 00:33:40.499 [INFO][5979] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 9 00:33:40.519028 containerd[1592]: 2025-09-09 00:33:40.505 [WARNING][5979] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="840ec3158e679d69dff21a50896b36dbde90719eff79e14b22894ff7dc8f2486" HandleID="k8s-pod-network.840ec3158e679d69dff21a50896b36dbde90719eff79e14b22894ff7dc8f2486" Workload="localhost-k8s-calico--apiserver--6544c75f8--qw6bt-eth0" Sep 9 00:33:40.519028 containerd[1592]: 2025-09-09 00:33:40.505 [INFO][5979] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="840ec3158e679d69dff21a50896b36dbde90719eff79e14b22894ff7dc8f2486" HandleID="k8s-pod-network.840ec3158e679d69dff21a50896b36dbde90719eff79e14b22894ff7dc8f2486" Workload="localhost-k8s-calico--apiserver--6544c75f8--qw6bt-eth0" Sep 9 00:33:40.519028 containerd[1592]: 2025-09-09 00:33:40.511 [INFO][5979] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 9 00:33:40.519028 containerd[1592]: 2025-09-09 00:33:40.514 [INFO][5970] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="840ec3158e679d69dff21a50896b36dbde90719eff79e14b22894ff7dc8f2486" Sep 9 00:33:40.519028 containerd[1592]: time="2025-09-09T00:33:40.519005423Z" level=info msg="TearDown network for sandbox \"840ec3158e679d69dff21a50896b36dbde90719eff79e14b22894ff7dc8f2486\" successfully" Sep 9 00:33:40.525328 containerd[1592]: time="2025-09-09T00:33:40.525267267Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"840ec3158e679d69dff21a50896b36dbde90719eff79e14b22894ff7dc8f2486\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Sep 9 00:33:40.525328 containerd[1592]: time="2025-09-09T00:33:40.525332552Z" level=info msg="RemovePodSandbox \"840ec3158e679d69dff21a50896b36dbde90719eff79e14b22894ff7dc8f2486\" returns successfully" Sep 9 00:33:40.526480 containerd[1592]: time="2025-09-09T00:33:40.525964030Z" level=info msg="StopPodSandbox for \"41eaf270ad07ea5dec1b464304cb6153982cf5ead00602cfbaf34784c065a96a\"" Sep 9 00:33:40.606136 containerd[1592]: 2025-09-09 00:33:40.563 [WARNING][5999] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="41eaf270ad07ea5dec1b464304cb6153982cf5ead00602cfbaf34784c065a96a" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--7988f88666--csr6g-eth0", GenerateName:"goldmane-7988f88666-", Namespace:"calico-system", SelfLink:"", UID:"1a0e64da-2e8f-4229-9092-4e3f71b7565b", ResourceVersion:"1097", Generation:0, CreationTimestamp:time.Date(2025, time.September, 9, 0, 32, 54, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"7988f88666", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"b7c1aa64bb7acb9829411e84de7fcc335853c9365924ac1d62fb591488d31d77", Pod:"goldmane-7988f88666-csr6g", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali7cd6eb22e43", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 9 00:33:40.606136 containerd[1592]: 2025-09-09 00:33:40.563 [INFO][5999] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="41eaf270ad07ea5dec1b464304cb6153982cf5ead00602cfbaf34784c065a96a" Sep 9 00:33:40.606136 containerd[1592]: 2025-09-09 00:33:40.563 [INFO][5999] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="41eaf270ad07ea5dec1b464304cb6153982cf5ead00602cfbaf34784c065a96a" iface="eth0" netns="" Sep 9 00:33:40.606136 containerd[1592]: 2025-09-09 00:33:40.563 [INFO][5999] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="41eaf270ad07ea5dec1b464304cb6153982cf5ead00602cfbaf34784c065a96a" Sep 9 00:33:40.606136 containerd[1592]: 2025-09-09 00:33:40.563 [INFO][5999] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="41eaf270ad07ea5dec1b464304cb6153982cf5ead00602cfbaf34784c065a96a" Sep 9 00:33:40.606136 containerd[1592]: 2025-09-09 00:33:40.591 [INFO][6009] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="41eaf270ad07ea5dec1b464304cb6153982cf5ead00602cfbaf34784c065a96a" HandleID="k8s-pod-network.41eaf270ad07ea5dec1b464304cb6153982cf5ead00602cfbaf34784c065a96a" Workload="localhost-k8s-goldmane--7988f88666--csr6g-eth0" Sep 9 00:33:40.606136 containerd[1592]: 2025-09-09 00:33:40.592 [INFO][6009] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 9 00:33:40.606136 containerd[1592]: 2025-09-09 00:33:40.592 [INFO][6009] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 9 00:33:40.606136 containerd[1592]: 2025-09-09 00:33:40.597 [WARNING][6009] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="41eaf270ad07ea5dec1b464304cb6153982cf5ead00602cfbaf34784c065a96a" HandleID="k8s-pod-network.41eaf270ad07ea5dec1b464304cb6153982cf5ead00602cfbaf34784c065a96a" Workload="localhost-k8s-goldmane--7988f88666--csr6g-eth0" Sep 9 00:33:40.606136 containerd[1592]: 2025-09-09 00:33:40.597 [INFO][6009] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="41eaf270ad07ea5dec1b464304cb6153982cf5ead00602cfbaf34784c065a96a" HandleID="k8s-pod-network.41eaf270ad07ea5dec1b464304cb6153982cf5ead00602cfbaf34784c065a96a" Workload="localhost-k8s-goldmane--7988f88666--csr6g-eth0" Sep 9 00:33:40.606136 containerd[1592]: 2025-09-09 00:33:40.599 [INFO][6009] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 9 00:33:40.606136 containerd[1592]: 2025-09-09 00:33:40.603 [INFO][5999] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="41eaf270ad07ea5dec1b464304cb6153982cf5ead00602cfbaf34784c065a96a" Sep 9 00:33:40.607907 containerd[1592]: time="2025-09-09T00:33:40.606184649Z" level=info msg="TearDown network for sandbox \"41eaf270ad07ea5dec1b464304cb6153982cf5ead00602cfbaf34784c065a96a\" successfully" Sep 9 00:33:40.607907 containerd[1592]: time="2025-09-09T00:33:40.606212761Z" level=info msg="StopPodSandbox for \"41eaf270ad07ea5dec1b464304cb6153982cf5ead00602cfbaf34784c065a96a\" returns successfully" Sep 9 00:33:40.607907 containerd[1592]: time="2025-09-09T00:33:40.606820595Z" level=info msg="RemovePodSandbox for \"41eaf270ad07ea5dec1b464304cb6153982cf5ead00602cfbaf34784c065a96a\"" Sep 9 00:33:40.607907 containerd[1592]: time="2025-09-09T00:33:40.606866262Z" level=info msg="Forcibly stopping sandbox \"41eaf270ad07ea5dec1b464304cb6153982cf5ead00602cfbaf34784c065a96a\"" Sep 9 00:33:40.706403 containerd[1592]: 2025-09-09 00:33:40.661 [WARNING][6026] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="41eaf270ad07ea5dec1b464304cb6153982cf5ead00602cfbaf34784c065a96a" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--7988f88666--csr6g-eth0", GenerateName:"goldmane-7988f88666-", Namespace:"calico-system", SelfLink:"", UID:"1a0e64da-2e8f-4229-9092-4e3f71b7565b", ResourceVersion:"1097", Generation:0, CreationTimestamp:time.Date(2025, time.September, 9, 0, 32, 54, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"7988f88666", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"b7c1aa64bb7acb9829411e84de7fcc335853c9365924ac1d62fb591488d31d77", Pod:"goldmane-7988f88666-csr6g", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali7cd6eb22e43", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 9 00:33:40.706403 containerd[1592]: 2025-09-09 00:33:40.662 [INFO][6026] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="41eaf270ad07ea5dec1b464304cb6153982cf5ead00602cfbaf34784c065a96a" Sep 9 00:33:40.706403 containerd[1592]: 2025-09-09 00:33:40.662 [INFO][6026] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="41eaf270ad07ea5dec1b464304cb6153982cf5ead00602cfbaf34784c065a96a" iface="eth0" netns="" Sep 9 00:33:40.706403 containerd[1592]: 2025-09-09 00:33:40.662 [INFO][6026] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="41eaf270ad07ea5dec1b464304cb6153982cf5ead00602cfbaf34784c065a96a" Sep 9 00:33:40.706403 containerd[1592]: 2025-09-09 00:33:40.662 [INFO][6026] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="41eaf270ad07ea5dec1b464304cb6153982cf5ead00602cfbaf34784c065a96a" Sep 9 00:33:40.706403 containerd[1592]: 2025-09-09 00:33:40.691 [INFO][6036] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="41eaf270ad07ea5dec1b464304cb6153982cf5ead00602cfbaf34784c065a96a" HandleID="k8s-pod-network.41eaf270ad07ea5dec1b464304cb6153982cf5ead00602cfbaf34784c065a96a" Workload="localhost-k8s-goldmane--7988f88666--csr6g-eth0" Sep 9 00:33:40.706403 containerd[1592]: 2025-09-09 00:33:40.692 [INFO][6036] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 9 00:33:40.706403 containerd[1592]: 2025-09-09 00:33:40.692 [INFO][6036] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 9 00:33:40.706403 containerd[1592]: 2025-09-09 00:33:40.699 [WARNING][6036] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="41eaf270ad07ea5dec1b464304cb6153982cf5ead00602cfbaf34784c065a96a" HandleID="k8s-pod-network.41eaf270ad07ea5dec1b464304cb6153982cf5ead00602cfbaf34784c065a96a" Workload="localhost-k8s-goldmane--7988f88666--csr6g-eth0" Sep 9 00:33:40.706403 containerd[1592]: 2025-09-09 00:33:40.699 [INFO][6036] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="41eaf270ad07ea5dec1b464304cb6153982cf5ead00602cfbaf34784c065a96a" HandleID="k8s-pod-network.41eaf270ad07ea5dec1b464304cb6153982cf5ead00602cfbaf34784c065a96a" Workload="localhost-k8s-goldmane--7988f88666--csr6g-eth0" Sep 9 00:33:40.706403 containerd[1592]: 2025-09-09 00:33:40.700 [INFO][6036] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 9 00:33:40.706403 containerd[1592]: 2025-09-09 00:33:40.703 [INFO][6026] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="41eaf270ad07ea5dec1b464304cb6153982cf5ead00602cfbaf34784c065a96a" Sep 9 00:33:40.706856 containerd[1592]: time="2025-09-09T00:33:40.706455782Z" level=info msg="TearDown network for sandbox \"41eaf270ad07ea5dec1b464304cb6153982cf5ead00602cfbaf34784c065a96a\" successfully" Sep 9 00:33:40.710593 containerd[1592]: time="2025-09-09T00:33:40.710562938Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"41eaf270ad07ea5dec1b464304cb6153982cf5ead00602cfbaf34784c065a96a\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Sep 9 00:33:40.710656 containerd[1592]: time="2025-09-09T00:33:40.710632550Z" level=info msg="RemovePodSandbox \"41eaf270ad07ea5dec1b464304cb6153982cf5ead00602cfbaf34784c065a96a\" returns successfully" Sep 9 00:33:40.711287 containerd[1592]: time="2025-09-09T00:33:40.711245984Z" level=info msg="StopPodSandbox for \"d2e24a1ecc7d9da02b3a4c3b268eb9aa6520b3498cba4e49f0f6eac8888917f3\"" Sep 9 00:33:40.781404 containerd[1592]: 2025-09-09 00:33:40.746 [WARNING][6056] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="d2e24a1ecc7d9da02b3a4c3b268eb9aa6520b3498cba4e49f0f6eac8888917f3" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--6544c75f8--n8q9k-eth0", GenerateName:"calico-apiserver-6544c75f8-", Namespace:"calico-apiserver", SelfLink:"", UID:"cba14676-c2d2-4393-a6fd-b4ef0dc67fba", ResourceVersion:"1189", Generation:0, CreationTimestamp:time.Date(2025, time.September, 9, 0, 32, 52, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6544c75f8", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"7a8a6c8d815ec0d35ff080da67491b7d80f4932c9d7c2135d61a96b494deadaa", Pod:"calico-apiserver-6544c75f8-n8q9k", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali80856d16d3e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 9 00:33:40.781404 containerd[1592]: 2025-09-09 00:33:40.746 [INFO][6056] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="d2e24a1ecc7d9da02b3a4c3b268eb9aa6520b3498cba4e49f0f6eac8888917f3" Sep 9 00:33:40.781404 containerd[1592]: 2025-09-09 00:33:40.746 [INFO][6056] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="d2e24a1ecc7d9da02b3a4c3b268eb9aa6520b3498cba4e49f0f6eac8888917f3" iface="eth0" netns="" Sep 9 00:33:40.781404 containerd[1592]: 2025-09-09 00:33:40.746 [INFO][6056] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="d2e24a1ecc7d9da02b3a4c3b268eb9aa6520b3498cba4e49f0f6eac8888917f3" Sep 9 00:33:40.781404 containerd[1592]: 2025-09-09 00:33:40.746 [INFO][6056] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="d2e24a1ecc7d9da02b3a4c3b268eb9aa6520b3498cba4e49f0f6eac8888917f3" Sep 9 00:33:40.781404 containerd[1592]: 2025-09-09 00:33:40.767 [INFO][6064] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="d2e24a1ecc7d9da02b3a4c3b268eb9aa6520b3498cba4e49f0f6eac8888917f3" HandleID="k8s-pod-network.d2e24a1ecc7d9da02b3a4c3b268eb9aa6520b3498cba4e49f0f6eac8888917f3" Workload="localhost-k8s-calico--apiserver--6544c75f8--n8q9k-eth0" Sep 9 00:33:40.781404 containerd[1592]: 2025-09-09 00:33:40.767 [INFO][6064] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 9 00:33:40.781404 containerd[1592]: 2025-09-09 00:33:40.767 [INFO][6064] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 9 00:33:40.781404 containerd[1592]: 2025-09-09 00:33:40.773 [WARNING][6064] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="d2e24a1ecc7d9da02b3a4c3b268eb9aa6520b3498cba4e49f0f6eac8888917f3" HandleID="k8s-pod-network.d2e24a1ecc7d9da02b3a4c3b268eb9aa6520b3498cba4e49f0f6eac8888917f3" Workload="localhost-k8s-calico--apiserver--6544c75f8--n8q9k-eth0" Sep 9 00:33:40.781404 containerd[1592]: 2025-09-09 00:33:40.773 [INFO][6064] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="d2e24a1ecc7d9da02b3a4c3b268eb9aa6520b3498cba4e49f0f6eac8888917f3" HandleID="k8s-pod-network.d2e24a1ecc7d9da02b3a4c3b268eb9aa6520b3498cba4e49f0f6eac8888917f3" Workload="localhost-k8s-calico--apiserver--6544c75f8--n8q9k-eth0" Sep 9 00:33:40.781404 containerd[1592]: 2025-09-09 00:33:40.774 [INFO][6064] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 9 00:33:40.781404 containerd[1592]: 2025-09-09 00:33:40.777 [INFO][6056] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="d2e24a1ecc7d9da02b3a4c3b268eb9aa6520b3498cba4e49f0f6eac8888917f3" Sep 9 00:33:40.781404 containerd[1592]: time="2025-09-09T00:33:40.781353558Z" level=info msg="TearDown network for sandbox \"d2e24a1ecc7d9da02b3a4c3b268eb9aa6520b3498cba4e49f0f6eac8888917f3\" successfully" Sep 9 00:33:40.781404 containerd[1592]: time="2025-09-09T00:33:40.781379959Z" level=info msg="StopPodSandbox for \"d2e24a1ecc7d9da02b3a4c3b268eb9aa6520b3498cba4e49f0f6eac8888917f3\" returns successfully" Sep 9 00:33:40.781938 containerd[1592]: time="2025-09-09T00:33:40.781857224Z" level=info msg="RemovePodSandbox for \"d2e24a1ecc7d9da02b3a4c3b268eb9aa6520b3498cba4e49f0f6eac8888917f3\"" Sep 9 00:33:40.781938 containerd[1592]: time="2025-09-09T00:33:40.781884546Z" level=info msg="Forcibly stopping sandbox \"d2e24a1ecc7d9da02b3a4c3b268eb9aa6520b3498cba4e49f0f6eac8888917f3\"" Sep 9 00:33:40.849467 systemd-resolved[1467]: Under memory pressure, flushing caches. Sep 9 00:33:40.850532 systemd-journald[1155]: Under memory pressure, flushing caches. Sep 9 00:33:40.849507 systemd-resolved[1467]: Flushed all caches. Sep 9 00:33:40.862237 containerd[1592]: 2025-09-09 00:33:40.816 [WARNING][6082] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="d2e24a1ecc7d9da02b3a4c3b268eb9aa6520b3498cba4e49f0f6eac8888917f3" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--6544c75f8--n8q9k-eth0", GenerateName:"calico-apiserver-6544c75f8-", Namespace:"calico-apiserver", SelfLink:"", UID:"cba14676-c2d2-4393-a6fd-b4ef0dc67fba", ResourceVersion:"1189", Generation:0, CreationTimestamp:time.Date(2025, time.September, 9, 0, 32, 52, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6544c75f8", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"7a8a6c8d815ec0d35ff080da67491b7d80f4932c9d7c2135d61a96b494deadaa", Pod:"calico-apiserver-6544c75f8-n8q9k", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali80856d16d3e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 9 00:33:40.862237 containerd[1592]: 2025-09-09 00:33:40.816 [INFO][6082] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="d2e24a1ecc7d9da02b3a4c3b268eb9aa6520b3498cba4e49f0f6eac8888917f3" Sep 9 00:33:40.862237 containerd[1592]: 2025-09-09 00:33:40.816 [INFO][6082] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="d2e24a1ecc7d9da02b3a4c3b268eb9aa6520b3498cba4e49f0f6eac8888917f3" iface="eth0" netns="" Sep 9 00:33:40.862237 containerd[1592]: 2025-09-09 00:33:40.816 [INFO][6082] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="d2e24a1ecc7d9da02b3a4c3b268eb9aa6520b3498cba4e49f0f6eac8888917f3" Sep 9 00:33:40.862237 containerd[1592]: 2025-09-09 00:33:40.816 [INFO][6082] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="d2e24a1ecc7d9da02b3a4c3b268eb9aa6520b3498cba4e49f0f6eac8888917f3" Sep 9 00:33:40.862237 containerd[1592]: 2025-09-09 00:33:40.844 [INFO][6092] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="d2e24a1ecc7d9da02b3a4c3b268eb9aa6520b3498cba4e49f0f6eac8888917f3" HandleID="k8s-pod-network.d2e24a1ecc7d9da02b3a4c3b268eb9aa6520b3498cba4e49f0f6eac8888917f3" Workload="localhost-k8s-calico--apiserver--6544c75f8--n8q9k-eth0" Sep 9 00:33:40.862237 containerd[1592]: 2025-09-09 00:33:40.844 [INFO][6092] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 9 00:33:40.862237 containerd[1592]: 2025-09-09 00:33:40.844 [INFO][6092] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 9 00:33:40.862237 containerd[1592]: 2025-09-09 00:33:40.854 [WARNING][6092] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="d2e24a1ecc7d9da02b3a4c3b268eb9aa6520b3498cba4e49f0f6eac8888917f3" HandleID="k8s-pod-network.d2e24a1ecc7d9da02b3a4c3b268eb9aa6520b3498cba4e49f0f6eac8888917f3" Workload="localhost-k8s-calico--apiserver--6544c75f8--n8q9k-eth0" Sep 9 00:33:40.862237 containerd[1592]: 2025-09-09 00:33:40.854 [INFO][6092] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="d2e24a1ecc7d9da02b3a4c3b268eb9aa6520b3498cba4e49f0f6eac8888917f3" HandleID="k8s-pod-network.d2e24a1ecc7d9da02b3a4c3b268eb9aa6520b3498cba4e49f0f6eac8888917f3" Workload="localhost-k8s-calico--apiserver--6544c75f8--n8q9k-eth0" Sep 9 00:33:40.862237 containerd[1592]: 2025-09-09 00:33:40.855 [INFO][6092] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 9 00:33:40.862237 containerd[1592]: 2025-09-09 00:33:40.859 [INFO][6082] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="d2e24a1ecc7d9da02b3a4c3b268eb9aa6520b3498cba4e49f0f6eac8888917f3" Sep 9 00:33:40.862720 containerd[1592]: time="2025-09-09T00:33:40.862268865Z" level=info msg="TearDown network for sandbox \"d2e24a1ecc7d9da02b3a4c3b268eb9aa6520b3498cba4e49f0f6eac8888917f3\" successfully" Sep 9 00:33:40.867526 containerd[1592]: time="2025-09-09T00:33:40.867489855Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"d2e24a1ecc7d9da02b3a4c3b268eb9aa6520b3498cba4e49f0f6eac8888917f3\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Sep 9 00:33:40.867577 containerd[1592]: time="2025-09-09T00:33:40.867561951Z" level=info msg="RemovePodSandbox \"d2e24a1ecc7d9da02b3a4c3b268eb9aa6520b3498cba4e49f0f6eac8888917f3\" returns successfully" Sep 9 00:33:40.868195 containerd[1592]: time="2025-09-09T00:33:40.868152743Z" level=info msg="StopPodSandbox for \"28219d814e8abef26e2c4210e72e52387ebc3b0ff691430eaba229212e0d1110\"" Sep 9 00:33:40.936197 containerd[1592]: 2025-09-09 00:33:40.901 [WARNING][6109] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="28219d814e8abef26e2c4210e72e52387ebc3b0ff691430eaba229212e0d1110" WorkloadEndpoint="localhost-k8s-whisker--864684cb86--jwk6q-eth0" Sep 9 00:33:40.936197 containerd[1592]: 2025-09-09 00:33:40.901 [INFO][6109] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="28219d814e8abef26e2c4210e72e52387ebc3b0ff691430eaba229212e0d1110" Sep 9 00:33:40.936197 containerd[1592]: 2025-09-09 00:33:40.901 [INFO][6109] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="28219d814e8abef26e2c4210e72e52387ebc3b0ff691430eaba229212e0d1110" iface="eth0" netns="" Sep 9 00:33:40.936197 containerd[1592]: 2025-09-09 00:33:40.901 [INFO][6109] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="28219d814e8abef26e2c4210e72e52387ebc3b0ff691430eaba229212e0d1110" Sep 9 00:33:40.936197 containerd[1592]: 2025-09-09 00:33:40.901 [INFO][6109] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="28219d814e8abef26e2c4210e72e52387ebc3b0ff691430eaba229212e0d1110" Sep 9 00:33:40.936197 containerd[1592]: 2025-09-09 00:33:40.923 [INFO][6117] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="28219d814e8abef26e2c4210e72e52387ebc3b0ff691430eaba229212e0d1110" HandleID="k8s-pod-network.28219d814e8abef26e2c4210e72e52387ebc3b0ff691430eaba229212e0d1110" Workload="localhost-k8s-whisker--864684cb86--jwk6q-eth0" Sep 9 00:33:40.936197 containerd[1592]: 2025-09-09 00:33:40.923 [INFO][6117] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 9 00:33:40.936197 containerd[1592]: 2025-09-09 00:33:40.923 [INFO][6117] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 9 00:33:40.936197 containerd[1592]: 2025-09-09 00:33:40.929 [WARNING][6117] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="28219d814e8abef26e2c4210e72e52387ebc3b0ff691430eaba229212e0d1110" HandleID="k8s-pod-network.28219d814e8abef26e2c4210e72e52387ebc3b0ff691430eaba229212e0d1110" Workload="localhost-k8s-whisker--864684cb86--jwk6q-eth0" Sep 9 00:33:40.936197 containerd[1592]: 2025-09-09 00:33:40.929 [INFO][6117] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="28219d814e8abef26e2c4210e72e52387ebc3b0ff691430eaba229212e0d1110" HandleID="k8s-pod-network.28219d814e8abef26e2c4210e72e52387ebc3b0ff691430eaba229212e0d1110" Workload="localhost-k8s-whisker--864684cb86--jwk6q-eth0" Sep 9 00:33:40.936197 containerd[1592]: 2025-09-09 00:33:40.930 [INFO][6117] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 9 00:33:40.936197 containerd[1592]: 2025-09-09 00:33:40.933 [INFO][6109] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="28219d814e8abef26e2c4210e72e52387ebc3b0ff691430eaba229212e0d1110" Sep 9 00:33:40.936632 containerd[1592]: time="2025-09-09T00:33:40.936238008Z" level=info msg="TearDown network for sandbox \"28219d814e8abef26e2c4210e72e52387ebc3b0ff691430eaba229212e0d1110\" successfully" Sep 9 00:33:40.936632 containerd[1592]: time="2025-09-09T00:33:40.936273545Z" level=info msg="StopPodSandbox for \"28219d814e8abef26e2c4210e72e52387ebc3b0ff691430eaba229212e0d1110\" returns successfully" Sep 9 00:33:40.941406 containerd[1592]: time="2025-09-09T00:33:40.939614887Z" level=info msg="RemovePodSandbox for \"28219d814e8abef26e2c4210e72e52387ebc3b0ff691430eaba229212e0d1110\"" Sep 9 00:33:40.941406 containerd[1592]: time="2025-09-09T00:33:40.939655986Z" level=info msg="Forcibly stopping sandbox \"28219d814e8abef26e2c4210e72e52387ebc3b0ff691430eaba229212e0d1110\"" Sep 9 00:33:41.009623 containerd[1592]: 2025-09-09 00:33:40.974 [WARNING][6136] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="28219d814e8abef26e2c4210e72e52387ebc3b0ff691430eaba229212e0d1110" WorkloadEndpoint="localhost-k8s-whisker--864684cb86--jwk6q-eth0" Sep 9 00:33:41.009623 containerd[1592]: 2025-09-09 00:33:40.974 [INFO][6136] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="28219d814e8abef26e2c4210e72e52387ebc3b0ff691430eaba229212e0d1110" Sep 9 00:33:41.009623 containerd[1592]: 2025-09-09 00:33:40.974 [INFO][6136] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="28219d814e8abef26e2c4210e72e52387ebc3b0ff691430eaba229212e0d1110" iface="eth0" netns="" Sep 9 00:33:41.009623 containerd[1592]: 2025-09-09 00:33:40.974 [INFO][6136] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="28219d814e8abef26e2c4210e72e52387ebc3b0ff691430eaba229212e0d1110" Sep 9 00:33:41.009623 containerd[1592]: 2025-09-09 00:33:40.974 [INFO][6136] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="28219d814e8abef26e2c4210e72e52387ebc3b0ff691430eaba229212e0d1110" Sep 9 00:33:41.009623 containerd[1592]: 2025-09-09 00:33:40.998 [INFO][6144] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="28219d814e8abef26e2c4210e72e52387ebc3b0ff691430eaba229212e0d1110" HandleID="k8s-pod-network.28219d814e8abef26e2c4210e72e52387ebc3b0ff691430eaba229212e0d1110" Workload="localhost-k8s-whisker--864684cb86--jwk6q-eth0" Sep 9 00:33:41.009623 containerd[1592]: 2025-09-09 00:33:40.998 [INFO][6144] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 9 00:33:41.009623 containerd[1592]: 2025-09-09 00:33:40.998 [INFO][6144] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 9 00:33:41.009623 containerd[1592]: 2025-09-09 00:33:41.003 [WARNING][6144] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="28219d814e8abef26e2c4210e72e52387ebc3b0ff691430eaba229212e0d1110" HandleID="k8s-pod-network.28219d814e8abef26e2c4210e72e52387ebc3b0ff691430eaba229212e0d1110" Workload="localhost-k8s-whisker--864684cb86--jwk6q-eth0" Sep 9 00:33:41.009623 containerd[1592]: 2025-09-09 00:33:41.003 [INFO][6144] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="28219d814e8abef26e2c4210e72e52387ebc3b0ff691430eaba229212e0d1110" HandleID="k8s-pod-network.28219d814e8abef26e2c4210e72e52387ebc3b0ff691430eaba229212e0d1110" Workload="localhost-k8s-whisker--864684cb86--jwk6q-eth0" Sep 9 00:33:41.009623 containerd[1592]: 2025-09-09 00:33:41.004 [INFO][6144] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 9 00:33:41.009623 containerd[1592]: 2025-09-09 00:33:41.006 [INFO][6136] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="28219d814e8abef26e2c4210e72e52387ebc3b0ff691430eaba229212e0d1110" Sep 9 00:33:41.010495 containerd[1592]: time="2025-09-09T00:33:41.009678784Z" level=info msg="TearDown network for sandbox \"28219d814e8abef26e2c4210e72e52387ebc3b0ff691430eaba229212e0d1110\" successfully" Sep 9 00:33:41.013911 containerd[1592]: time="2025-09-09T00:33:41.013876520Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"28219d814e8abef26e2c4210e72e52387ebc3b0ff691430eaba229212e0d1110\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Sep 9 00:33:41.013965 containerd[1592]: time="2025-09-09T00:33:41.013933839Z" level=info msg="RemovePodSandbox \"28219d814e8abef26e2c4210e72e52387ebc3b0ff691430eaba229212e0d1110\" returns successfully" Sep 9 00:33:45.106653 systemd[1]: Started sshd@14-10.0.0.108:22-10.0.0.1:42650.service - OpenSSH per-connection server daemon (10.0.0.1:42650). Sep 9 00:33:45.197551 sshd[6156]: Accepted publickey for core from 10.0.0.1 port 42650 ssh2: RSA SHA256:KA9/SrLi0HJZnQ3nz9u7pIgg2ymhn74LV9fPMAvvX5M Sep 9 00:33:45.199396 sshd[6156]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 00:33:45.204467 systemd-logind[1548]: New session 15 of user core. Sep 9 00:33:45.214774 systemd[1]: Started session-15.scope - Session 15 of User core. Sep 9 00:33:45.338001 sshd[6156]: pam_unix(sshd:session): session closed for user core Sep 9 00:33:45.342970 systemd[1]: sshd@14-10.0.0.108:22-10.0.0.1:42650.service: Deactivated successfully. Sep 9 00:33:45.346274 systemd[1]: session-15.scope: Deactivated successfully. Sep 9 00:33:45.346501 systemd-logind[1548]: Session 15 logged out. Waiting for processes to exit. Sep 9 00:33:45.347963 systemd-logind[1548]: Removed session 15. Sep 9 00:33:48.523768 kubelet[2703]: E0909 00:33:48.523693 2703 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:33:50.351799 systemd[1]: Started sshd@15-10.0.0.108:22-10.0.0.1:38648.service - OpenSSH per-connection server daemon (10.0.0.1:38648). Sep 9 00:33:50.401826 sshd[6194]: Accepted publickey for core from 10.0.0.1 port 38648 ssh2: RSA SHA256:KA9/SrLi0HJZnQ3nz9u7pIgg2ymhn74LV9fPMAvvX5M Sep 9 00:33:50.404142 sshd[6194]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 00:33:50.408686 systemd-logind[1548]: New session 16 of user core. Sep 9 00:33:50.415759 systemd[1]: Started session-16.scope - Session 16 of User core. 
Sep 9 00:33:50.524129 kubelet[2703]: E0909 00:33:50.524067 2703 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 9 00:33:50.551451 sshd[6194]: pam_unix(sshd:session): session closed for user core
Sep 9 00:33:50.555618 systemd[1]: sshd@15-10.0.0.108:22-10.0.0.1:38648.service: Deactivated successfully.
Sep 9 00:33:50.558158 systemd-logind[1548]: Session 16 logged out. Waiting for processes to exit.
Sep 9 00:33:50.558184 systemd[1]: session-16.scope: Deactivated successfully.
Sep 9 00:33:50.559329 systemd-logind[1548]: Removed session 16.
Sep 9 00:33:55.564049 systemd[1]: Started sshd@16-10.0.0.108:22-10.0.0.1:38658.service - OpenSSH per-connection server daemon (10.0.0.1:38658).
Sep 9 00:33:55.599874 sshd[6254]: Accepted publickey for core from 10.0.0.1 port 38658 ssh2: RSA SHA256:KA9/SrLi0HJZnQ3nz9u7pIgg2ymhn74LV9fPMAvvX5M
Sep 9 00:33:55.602002 sshd[6254]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 9 00:33:55.607513 systemd-logind[1548]: New session 17 of user core.
Sep 9 00:33:55.612933 systemd[1]: Started session-17.scope - Session 17 of User core.
Sep 9 00:33:55.813798 sshd[6254]: pam_unix(sshd:session): session closed for user core
Sep 9 00:33:55.820147 systemd[1]: sshd@16-10.0.0.108:22-10.0.0.1:38658.service: Deactivated successfully.
Sep 9 00:33:55.822714 systemd-logind[1548]: Session 17 logged out. Waiting for processes to exit.
Sep 9 00:33:55.822789 systemd[1]: session-17.scope: Deactivated successfully.
Sep 9 00:33:55.824917 systemd-logind[1548]: Removed session 17.
Sep 9 00:33:57.523349 kubelet[2703]: E0909 00:33:57.523311 2703 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 9 00:34:00.826782 systemd[1]: Started sshd@17-10.0.0.108:22-10.0.0.1:57084.service - OpenSSH per-connection server daemon (10.0.0.1:57084).
Sep 9 00:34:00.874045 sshd[6274]: Accepted publickey for core from 10.0.0.1 port 57084 ssh2: RSA SHA256:KA9/SrLi0HJZnQ3nz9u7pIgg2ymhn74LV9fPMAvvX5M
Sep 9 00:34:00.876527 sshd[6274]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 9 00:34:00.883804 systemd-logind[1548]: New session 18 of user core.
Sep 9 00:34:00.886868 systemd[1]: Started session-18.scope - Session 18 of User core.
Sep 9 00:34:01.167356 sshd[6274]: pam_unix(sshd:session): session closed for user core
Sep 9 00:34:01.174681 systemd[1]: Started sshd@18-10.0.0.108:22-10.0.0.1:57100.service - OpenSSH per-connection server daemon (10.0.0.1:57100).
Sep 9 00:34:01.175322 systemd[1]: sshd@17-10.0.0.108:22-10.0.0.1:57084.service: Deactivated successfully.
Sep 9 00:34:01.178518 systemd[1]: session-18.scope: Deactivated successfully.
Sep 9 00:34:01.180947 systemd-logind[1548]: Session 18 logged out. Waiting for processes to exit.
Sep 9 00:34:01.182833 systemd-logind[1548]: Removed session 18.
Sep 9 00:34:01.212935 sshd[6286]: Accepted publickey for core from 10.0.0.1 port 57100 ssh2: RSA SHA256:KA9/SrLi0HJZnQ3nz9u7pIgg2ymhn74LV9fPMAvvX5M
Sep 9 00:34:01.215152 sshd[6286]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 9 00:34:01.219908 systemd-logind[1548]: New session 19 of user core.
Sep 9 00:34:01.230782 systemd[1]: Started session-19.scope - Session 19 of User core.
Sep 9 00:34:01.551658 sshd[6286]: pam_unix(sshd:session): session closed for user core
Sep 9 00:34:01.563936 systemd[1]: Started sshd@19-10.0.0.108:22-10.0.0.1:57106.service - OpenSSH per-connection server daemon (10.0.0.1:57106).
Sep 9 00:34:01.565971 systemd[1]: sshd@18-10.0.0.108:22-10.0.0.1:57100.service: Deactivated successfully.
Sep 9 00:34:01.568605 systemd[1]: session-19.scope: Deactivated successfully.
Sep 9 00:34:01.571221 systemd-logind[1548]: Session 19 logged out. Waiting for processes to exit.
Sep 9 00:34:01.572667 systemd-logind[1548]: Removed session 19.
Sep 9 00:34:01.608661 sshd[6302]: Accepted publickey for core from 10.0.0.1 port 57106 ssh2: RSA SHA256:KA9/SrLi0HJZnQ3nz9u7pIgg2ymhn74LV9fPMAvvX5M
Sep 9 00:34:01.610804 sshd[6302]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 9 00:34:01.616141 systemd-logind[1548]: New session 20 of user core.
Sep 9 00:34:01.624726 systemd[1]: Started session-20.scope - Session 20 of User core.
Sep 9 00:34:03.272643 sshd[6302]: pam_unix(sshd:session): session closed for user core
Sep 9 00:34:03.289632 systemd[1]: Started sshd@20-10.0.0.108:22-10.0.0.1:57118.service - OpenSSH per-connection server daemon (10.0.0.1:57118).
Sep 9 00:34:03.291157 systemd[1]: sshd@19-10.0.0.108:22-10.0.0.1:57106.service: Deactivated successfully.
Sep 9 00:34:03.295532 systemd-logind[1548]: Session 20 logged out. Waiting for processes to exit.
Sep 9 00:34:03.296960 systemd[1]: session-20.scope: Deactivated successfully.
Sep 9 00:34:03.298314 systemd-logind[1548]: Removed session 20.
Sep 9 00:34:03.342654 sshd[6321]: Accepted publickey for core from 10.0.0.1 port 57118 ssh2: RSA SHA256:KA9/SrLi0HJZnQ3nz9u7pIgg2ymhn74LV9fPMAvvX5M
Sep 9 00:34:03.344809 sshd[6321]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 9 00:34:03.349137 systemd-logind[1548]: New session 21 of user core.
Sep 9 00:34:03.354827 systemd[1]: Started session-21.scope - Session 21 of User core.
Sep 9 00:34:03.863497 sshd[6321]: pam_unix(sshd:session): session closed for user core
Sep 9 00:34:03.870729 systemd[1]: Started sshd@21-10.0.0.108:22-10.0.0.1:57126.service - OpenSSH per-connection server daemon (10.0.0.1:57126).
Sep 9 00:34:03.871270 systemd[1]: sshd@20-10.0.0.108:22-10.0.0.1:57118.service: Deactivated successfully.
Sep 9 00:34:03.879653 systemd[1]: session-21.scope: Deactivated successfully.
Sep 9 00:34:03.881031 systemd-logind[1548]: Session 21 logged out. Waiting for processes to exit.
Sep 9 00:34:03.883654 systemd-logind[1548]: Removed session 21.
Sep 9 00:34:03.914471 sshd[6335]: Accepted publickey for core from 10.0.0.1 port 57126 ssh2: RSA SHA256:KA9/SrLi0HJZnQ3nz9u7pIgg2ymhn74LV9fPMAvvX5M
Sep 9 00:34:03.916281 sshd[6335]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 9 00:34:03.922017 systemd-logind[1548]: New session 22 of user core.
Sep 9 00:34:03.925827 systemd[1]: Started session-22.scope - Session 22 of User core.
Sep 9 00:34:04.053808 sshd[6335]: pam_unix(sshd:session): session closed for user core
Sep 9 00:34:04.058627 systemd[1]: sshd@21-10.0.0.108:22-10.0.0.1:57126.service: Deactivated successfully.
Sep 9 00:34:04.062065 systemd[1]: session-22.scope: Deactivated successfully.
Sep 9 00:34:04.062390 systemd-logind[1548]: Session 22 logged out. Waiting for processes to exit.
Sep 9 00:34:04.063914 systemd-logind[1548]: Removed session 22.
Sep 9 00:34:09.063812 systemd[1]: Started sshd@22-10.0.0.108:22-10.0.0.1:57140.service - OpenSSH per-connection server daemon (10.0.0.1:57140).
Sep 9 00:34:09.099019 sshd[6355]: Accepted publickey for core from 10.0.0.1 port 57140 ssh2: RSA SHA256:KA9/SrLi0HJZnQ3nz9u7pIgg2ymhn74LV9fPMAvvX5M
Sep 9 00:34:09.100825 sshd[6355]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 9 00:34:09.104908 systemd-logind[1548]: New session 23 of user core.
Sep 9 00:34:09.115764 systemd[1]: Started session-23.scope - Session 23 of User core.
Sep 9 00:34:09.240479 sshd[6355]: pam_unix(sshd:session): session closed for user core
Sep 9 00:34:09.249133 systemd[1]: sshd@22-10.0.0.108:22-10.0.0.1:57140.service: Deactivated successfully.
Sep 9 00:34:09.250513 systemd-logind[1548]: Session 23 logged out. Waiting for processes to exit.
Sep 9 00:34:09.253185 systemd[1]: session-23.scope: Deactivated successfully.
Sep 9 00:34:09.254977 systemd-logind[1548]: Removed session 23.
Sep 9 00:34:14.266988 systemd[1]: Started sshd@23-10.0.0.108:22-10.0.0.1:43220.service - OpenSSH per-connection server daemon (10.0.0.1:43220).
Sep 9 00:34:14.328022 sshd[6375]: Accepted publickey for core from 10.0.0.1 port 43220 ssh2: RSA SHA256:KA9/SrLi0HJZnQ3nz9u7pIgg2ymhn74LV9fPMAvvX5M
Sep 9 00:34:14.330754 sshd[6375]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 9 00:34:14.345915 systemd-logind[1548]: New session 24 of user core.
Sep 9 00:34:14.358271 systemd[1]: Started session-24.scope - Session 24 of User core.
Sep 9 00:34:14.652976 sshd[6375]: pam_unix(sshd:session): session closed for user core
Sep 9 00:34:14.657755 systemd[1]: sshd@23-10.0.0.108:22-10.0.0.1:43220.service: Deactivated successfully.
Sep 9 00:34:14.662903 systemd-logind[1548]: Session 24 logged out. Waiting for processes to exit.
Sep 9 00:34:14.664524 systemd[1]: session-24.scope: Deactivated successfully.
Sep 9 00:34:14.666022 systemd-logind[1548]: Removed session 24.
Sep 9 00:34:17.522719 kubelet[2703]: E0909 00:34:17.522531 2703 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 9 00:34:19.670691 systemd[1]: Started sshd@24-10.0.0.108:22-10.0.0.1:43224.service - OpenSSH per-connection server daemon (10.0.0.1:43224).
Sep 9 00:34:19.700972 sshd[6414]: Accepted publickey for core from 10.0.0.1 port 43224 ssh2: RSA SHA256:KA9/SrLi0HJZnQ3nz9u7pIgg2ymhn74LV9fPMAvvX5M
Sep 9 00:34:19.702793 sshd[6414]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 9 00:34:19.706810 systemd-logind[1548]: New session 25 of user core.
Sep 9 00:34:19.718713 systemd[1]: Started session-25.scope - Session 25 of User core.
Sep 9 00:34:19.911168 sshd[6414]: pam_unix(sshd:session): session closed for user core
Sep 9 00:34:19.916078 systemd[1]: sshd@24-10.0.0.108:22-10.0.0.1:43224.service: Deactivated successfully.
Sep 9 00:34:19.918807 systemd[1]: session-25.scope: Deactivated successfully.
Sep 9 00:34:19.920026 systemd-logind[1548]: Session 25 logged out. Waiting for processes to exit.
Sep 9 00:34:19.921031 systemd-logind[1548]: Removed session 25.
Sep 9 00:34:23.941088 systemd[1]: run-containerd-runc-k8s.io-f64323706c6864ad45c22740f6b602196a6b63a6e4f8352ec6869fbd0a6b68d2-runc.KT0aT0.mount: Deactivated successfully.
Sep 9 00:34:24.925740 systemd[1]: Started sshd@25-10.0.0.108:22-10.0.0.1:51042.service - OpenSSH per-connection server daemon (10.0.0.1:51042).
Sep 9 00:34:24.957548 sshd[6474]: Accepted publickey for core from 10.0.0.1 port 51042 ssh2: RSA SHA256:KA9/SrLi0HJZnQ3nz9u7pIgg2ymhn74LV9fPMAvvX5M
Sep 9 00:34:24.959689 sshd[6474]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 9 00:34:24.964103 systemd-logind[1548]: New session 26 of user core.
Sep 9 00:34:24.970831 systemd[1]: Started session-26.scope - Session 26 of User core.
Sep 9 00:34:25.081977 sshd[6474]: pam_unix(sshd:session): session closed for user core
Sep 9 00:34:25.087022 systemd[1]: sshd@25-10.0.0.108:22-10.0.0.1:51042.service: Deactivated successfully.
Sep 9 00:34:25.090216 systemd[1]: session-26.scope: Deactivated successfully.
Sep 9 00:34:25.091487 systemd-logind[1548]: Session 26 logged out. Waiting for processes to exit.
Sep 9 00:34:25.092520 systemd-logind[1548]: Removed session 26.